Test Report: KVM_Linux_crio 20083

                    
6c4fcf300662436f71bcf8696a35dd22d9fca43a:2024-12-12:37445

Failed tests (31/320)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 156.75
38 TestAddons/parallel/MetricsServer 360.31
47 TestAddons/StoppedEnableDisable 154.31
166 TestMultiControlPlane/serial/StopSecondaryNode 141.6
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.77
168 TestMultiControlPlane/serial/RestartSecondaryNode 6.3
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.18
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 378.58
173 TestMultiControlPlane/serial/StopCluster 142.32
233 TestMultiNode/serial/RestartKeepsNodes 325.27
235 TestMultiNode/serial/StopMultiNode 145.15
242 TestPreload 212.45
250 TestKubernetesUpgrade 411.2
293 TestStartStop/group/old-k8s-version/serial/FirstStart 288.88
301 TestStartStop/group/no-preload/serial/Stop 139.3
305 TestStartStop/group/embed-certs/serial/Stop 139.07
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.02
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/DeployApp 0.48
312 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 106.49
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
319 TestStartStop/group/old-k8s-version/serial/SecondStart 730.52
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.24
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.14
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.21
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.46
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 415.07
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 476.73
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 324.16
327 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 131.06
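
Each row above is a nested Go subtest, so the slash-separated name is also the -run pattern needed to reproduce just that failure. A minimal sketch (not the actual minikube test layout, which lives in test/integration) of how nested t.Run calls produce these names:

```go
package integration_sketch

import "testing"

// Illustration only: shows how a name such as
// TestStartStop/group/old-k8s-version/serial/SecondStart is built up from
// nested subtests, and can therefore be selected with
//   go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart'
func TestStartStop(t *testing.T) {
	t.Run("group", func(t *testing.T) {
		t.Run("old-k8s-version", func(t *testing.T) {
			t.Run("serial", func(t *testing.T) {
				t.Run("SecondStart", func(t *testing.T) {
					// The real start/stop logic is in minikube's integration suite;
					// this stub exists only to illustrate the naming scheme.
				})
			})
		})
	})
}
```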
TestAddons/parallel/Ingress (156.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-021354 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-021354 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-021354 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [264cded5-669e-4c91-a0aa-800234ac799a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [264cded5-669e-4c91-a0aa-800234ac799a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004800825s
I1211 23:38:52.367369   93600 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-021354 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.374551472s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-021354 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.225
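
The step that actually fails is the in-VM reachability check at addons_test.go:262: curl -s http://127.0.0.1/ with a Host: nginx.example.com header, run over minikube ssh. After roughly two minutes it gives up; ssh reports exit status 28, which matches curl's operation-timed-out code. A rough Go equivalent of that probe (a sketch only, assuming it runs from inside the minikube node, where the ingress controller is expected to listen on port 80):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// Rough stand-in for `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`
// as issued by the ingress test from inside the minikube VM.
func main() {
	client := &http.Client{Timeout: 10 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Setting req.Host overrides the Host header, so the request is routed to
	// the nginx Ingress rule for nginx.example.com instead of the default backend.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// A timeout here corresponds to the curl exit status 28 seen above.
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("ingress responded with", resp.Status)
}
```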
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-021354 -n addons-021354
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-021354 logs -n 25: (1.295979996s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:34 UTC |
	| delete  | -p download-only-596435                                                                     | download-only-596435 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:34 UTC |
	| delete  | -p download-only-531520                                                                     | download-only-531520 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:34 UTC |
	| delete  | -p download-only-596435                                                                     | download-only-596435 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-922560 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC |                     |
	|         | binary-mirror-922560                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-922560                                                                     | binary-mirror-922560 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC |                     |
	|         | addons-021354                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC |                     |
	|         | addons-021354                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-021354 --wait=true                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:37 UTC | 11 Dec 24 23:37 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | -p addons-021354                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-021354 ip                                                                            | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-021354 ssh cat                                                                       | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | /opt/local-path-provisioner/pvc-6ce29942-9383-4c5e-b256-1d3d7149a74d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-021354 ssh curl -s                                                                   | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:39 UTC | 11 Dec 24 23:39 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:39 UTC | 11 Dec 24 23:39 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-021354 ip                                                                            | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:41 UTC | 11 Dec 24 23:41 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:34:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:34:17.941564   94369 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:34:17.941676   94369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:34:17.941686   94369 out.go:358] Setting ErrFile to fd 2...
	I1211 23:34:17.941691   94369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:34:17.941851   94369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:34:17.942483   94369 out.go:352] Setting JSON to false
	I1211 23:34:17.943337   94369 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8200,"bootTime":1733951858,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:34:17.943396   94369 start.go:139] virtualization: kvm guest
	I1211 23:34:17.945493   94369 out.go:177] * [addons-021354] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1211 23:34:17.946823   94369 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:34:17.946892   94369 notify.go:220] Checking for updates...
	I1211 23:34:17.949318   94369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:34:17.950585   94369 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:34:17.951834   94369 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:34:17.953374   94369 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:34:17.954508   94369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:34:17.955834   94369 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:34:17.989182   94369 out.go:177] * Using the kvm2 driver based on user configuration
	I1211 23:34:17.990314   94369 start.go:297] selected driver: kvm2
	I1211 23:34:17.990327   94369 start.go:901] validating driver "kvm2" against <nil>
	I1211 23:34:17.990341   94369 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:34:17.991051   94369 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:34:17.991142   94369 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1211 23:34:18.006119   94369 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1211 23:34:18.006171   94369 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:34:18.006426   94369 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:34:18.006457   94369 cni.go:84] Creating CNI manager for ""
	I1211 23:34:18.006500   94369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:34:18.006512   94369 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:34:18.006555   94369 start.go:340] cluster config:
	{Name:addons-021354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:34:18.006677   94369 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:34:18.009108   94369 out.go:177] * Starting "addons-021354" primary control-plane node in "addons-021354" cluster
	I1211 23:34:18.010222   94369 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:34:18.010273   94369 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:34:18.010280   94369 cache.go:56] Caching tarball of preloaded images
	I1211 23:34:18.010368   94369 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:34:18.010379   94369 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:34:18.010690   94369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/config.json ...
	I1211 23:34:18.010710   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/config.json: {Name:mk5187adff29800e1ee3705d8e7a6af6bc743940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:18.010857   94369 start.go:360] acquireMachinesLock for addons-021354: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:34:18.010901   94369 start.go:364] duration metric: took 30.807µs to acquireMachinesLock for "addons-021354"
	I1211 23:34:18.010919   94369 start.go:93] Provisioning new machine with config: &{Name:addons-021354 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:34:18.010985   94369 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:34:18.013328   94369 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1211 23:34:18.013517   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:34:18.013561   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:34:18.028613   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I1211 23:34:18.029137   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:34:18.029680   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:34:18.029700   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:34:18.030091   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:34:18.030252   94369 main.go:141] libmachine: (addons-021354) Calling .GetMachineName
	I1211 23:34:18.030393   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:18.030497   94369 start.go:159] libmachine.API.Create for "addons-021354" (driver="kvm2")
	I1211 23:34:18.030523   94369 client.go:168] LocalClient.Create starting
	I1211 23:34:18.030561   94369 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:34:18.100453   94369 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:34:18.277891   94369 main.go:141] libmachine: Running pre-create checks...
	I1211 23:34:18.277918   94369 main.go:141] libmachine: (addons-021354) Calling .PreCreateCheck
	I1211 23:34:18.278482   94369 main.go:141] libmachine: (addons-021354) Calling .GetConfigRaw
	I1211 23:34:18.279015   94369 main.go:141] libmachine: Creating machine...
	I1211 23:34:18.279035   94369 main.go:141] libmachine: (addons-021354) Calling .Create
	I1211 23:34:18.279259   94369 main.go:141] libmachine: (addons-021354) Creating KVM machine...
	I1211 23:34:18.280592   94369 main.go:141] libmachine: (addons-021354) DBG | found existing default KVM network
	I1211 23:34:18.281485   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.281311   94392 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
	I1211 23:34:18.281535   94369 main.go:141] libmachine: (addons-021354) DBG | created network xml: 
	I1211 23:34:18.281559   94369 main.go:141] libmachine: (addons-021354) DBG | <network>
	I1211 23:34:18.281569   94369 main.go:141] libmachine: (addons-021354) DBG |   <name>mk-addons-021354</name>
	I1211 23:34:18.281577   94369 main.go:141] libmachine: (addons-021354) DBG |   <dns enable='no'/>
	I1211 23:34:18.281583   94369 main.go:141] libmachine: (addons-021354) DBG |   
	I1211 23:34:18.281590   94369 main.go:141] libmachine: (addons-021354) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1211 23:34:18.281595   94369 main.go:141] libmachine: (addons-021354) DBG |     <dhcp>
	I1211 23:34:18.281600   94369 main.go:141] libmachine: (addons-021354) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1211 23:34:18.281606   94369 main.go:141] libmachine: (addons-021354) DBG |     </dhcp>
	I1211 23:34:18.281613   94369 main.go:141] libmachine: (addons-021354) DBG |   </ip>
	I1211 23:34:18.281621   94369 main.go:141] libmachine: (addons-021354) DBG |   
	I1211 23:34:18.281625   94369 main.go:141] libmachine: (addons-021354) DBG | </network>
	I1211 23:34:18.281631   94369 main.go:141] libmachine: (addons-021354) DBG | 
	I1211 23:34:18.286957   94369 main.go:141] libmachine: (addons-021354) DBG | trying to create private KVM network mk-addons-021354 192.168.39.0/24...
	I1211 23:34:18.357480   94369 main.go:141] libmachine: (addons-021354) DBG | private KVM network mk-addons-021354 192.168.39.0/24 created
	I1211 23:34:18.357507   94369 main.go:141] libmachine: (addons-021354) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354 ...
	I1211 23:34:18.357532   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.357422   94392 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:34:18.357543   94369 main.go:141] libmachine: (addons-021354) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:34:18.357558   94369 main.go:141] libmachine: (addons-021354) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:34:18.643058   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.642882   94392 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa...
	I1211 23:34:18.745328   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.745187   94392 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/addons-021354.rawdisk...
	I1211 23:34:18.745376   94369 main.go:141] libmachine: (addons-021354) DBG | Writing magic tar header
	I1211 23:34:18.745386   94369 main.go:141] libmachine: (addons-021354) DBG | Writing SSH key tar header
	I1211 23:34:18.745393   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.745317   94392 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354 ...
	I1211 23:34:18.745414   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354
	I1211 23:34:18.745428   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354 (perms=drwx------)
	I1211 23:34:18.745447   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:34:18.745454   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:34:18.745459   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:34:18.745468   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:34:18.745477   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:34:18.745483   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:34:18.745491   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:34:18.745515   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:34:18.745519   94369 main.go:141] libmachine: (addons-021354) Creating domain...
	I1211 23:34:18.745576   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:34:18.745606   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:34:18.745620   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home
	I1211 23:34:18.745630   94369 main.go:141] libmachine: (addons-021354) DBG | Skipping /home - not owner
	I1211 23:34:18.746981   94369 main.go:141] libmachine: (addons-021354) define libvirt domain using xml: 
	I1211 23:34:18.747019   94369 main.go:141] libmachine: (addons-021354) <domain type='kvm'>
	I1211 23:34:18.747030   94369 main.go:141] libmachine: (addons-021354)   <name>addons-021354</name>
	I1211 23:34:18.747037   94369 main.go:141] libmachine: (addons-021354)   <memory unit='MiB'>4000</memory>
	I1211 23:34:18.747045   94369 main.go:141] libmachine: (addons-021354)   <vcpu>2</vcpu>
	I1211 23:34:18.747054   94369 main.go:141] libmachine: (addons-021354)   <features>
	I1211 23:34:18.747073   94369 main.go:141] libmachine: (addons-021354)     <acpi/>
	I1211 23:34:18.747087   94369 main.go:141] libmachine: (addons-021354)     <apic/>
	I1211 23:34:18.747108   94369 main.go:141] libmachine: (addons-021354)     <pae/>
	I1211 23:34:18.747118   94369 main.go:141] libmachine: (addons-021354)     
	I1211 23:34:18.747128   94369 main.go:141] libmachine: (addons-021354)   </features>
	I1211 23:34:18.747137   94369 main.go:141] libmachine: (addons-021354)   <cpu mode='host-passthrough'>
	I1211 23:34:18.747143   94369 main.go:141] libmachine: (addons-021354)   
	I1211 23:34:18.747161   94369 main.go:141] libmachine: (addons-021354)   </cpu>
	I1211 23:34:18.747172   94369 main.go:141] libmachine: (addons-021354)   <os>
	I1211 23:34:18.747185   94369 main.go:141] libmachine: (addons-021354)     <type>hvm</type>
	I1211 23:34:18.747219   94369 main.go:141] libmachine: (addons-021354)     <boot dev='cdrom'/>
	I1211 23:34:18.747246   94369 main.go:141] libmachine: (addons-021354)     <boot dev='hd'/>
	I1211 23:34:18.747279   94369 main.go:141] libmachine: (addons-021354)     <bootmenu enable='no'/>
	I1211 23:34:18.747298   94369 main.go:141] libmachine: (addons-021354)   </os>
	I1211 23:34:18.747307   94369 main.go:141] libmachine: (addons-021354)   <devices>
	I1211 23:34:18.747315   94369 main.go:141] libmachine: (addons-021354)     <disk type='file' device='cdrom'>
	I1211 23:34:18.747323   94369 main.go:141] libmachine: (addons-021354)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/boot2docker.iso'/>
	I1211 23:34:18.747330   94369 main.go:141] libmachine: (addons-021354)       <target dev='hdc' bus='scsi'/>
	I1211 23:34:18.747335   94369 main.go:141] libmachine: (addons-021354)       <readonly/>
	I1211 23:34:18.747342   94369 main.go:141] libmachine: (addons-021354)     </disk>
	I1211 23:34:18.747348   94369 main.go:141] libmachine: (addons-021354)     <disk type='file' device='disk'>
	I1211 23:34:18.747355   94369 main.go:141] libmachine: (addons-021354)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:34:18.747363   94369 main.go:141] libmachine: (addons-021354)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/addons-021354.rawdisk'/>
	I1211 23:34:18.747370   94369 main.go:141] libmachine: (addons-021354)       <target dev='hda' bus='virtio'/>
	I1211 23:34:18.747375   94369 main.go:141] libmachine: (addons-021354)     </disk>
	I1211 23:34:18.747384   94369 main.go:141] libmachine: (addons-021354)     <interface type='network'>
	I1211 23:34:18.747390   94369 main.go:141] libmachine: (addons-021354)       <source network='mk-addons-021354'/>
	I1211 23:34:18.747400   94369 main.go:141] libmachine: (addons-021354)       <model type='virtio'/>
	I1211 23:34:18.747405   94369 main.go:141] libmachine: (addons-021354)     </interface>
	I1211 23:34:18.747416   94369 main.go:141] libmachine: (addons-021354)     <interface type='network'>
	I1211 23:34:18.747428   94369 main.go:141] libmachine: (addons-021354)       <source network='default'/>
	I1211 23:34:18.747435   94369 main.go:141] libmachine: (addons-021354)       <model type='virtio'/>
	I1211 23:34:18.747440   94369 main.go:141] libmachine: (addons-021354)     </interface>
	I1211 23:34:18.747446   94369 main.go:141] libmachine: (addons-021354)     <serial type='pty'>
	I1211 23:34:18.747463   94369 main.go:141] libmachine: (addons-021354)       <target port='0'/>
	I1211 23:34:18.747469   94369 main.go:141] libmachine: (addons-021354)     </serial>
	I1211 23:34:18.747475   94369 main.go:141] libmachine: (addons-021354)     <console type='pty'>
	I1211 23:34:18.747484   94369 main.go:141] libmachine: (addons-021354)       <target type='serial' port='0'/>
	I1211 23:34:18.747490   94369 main.go:141] libmachine: (addons-021354)     </console>
	I1211 23:34:18.747496   94369 main.go:141] libmachine: (addons-021354)     <rng model='virtio'>
	I1211 23:34:18.747502   94369 main.go:141] libmachine: (addons-021354)       <backend model='random'>/dev/random</backend>
	I1211 23:34:18.747508   94369 main.go:141] libmachine: (addons-021354)     </rng>
	I1211 23:34:18.747512   94369 main.go:141] libmachine: (addons-021354)     
	I1211 23:34:18.747518   94369 main.go:141] libmachine: (addons-021354)     
	I1211 23:34:18.747522   94369 main.go:141] libmachine: (addons-021354)   </devices>
	I1211 23:34:18.747528   94369 main.go:141] libmachine: (addons-021354) </domain>
	I1211 23:34:18.747536   94369 main.go:141] libmachine: (addons-021354) 
	I1211 23:34:18.752053   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:19:d9:41 in network default
	I1211 23:34:18.752719   94369 main.go:141] libmachine: (addons-021354) Ensuring networks are active...
	I1211 23:34:18.752748   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:18.753643   94369 main.go:141] libmachine: (addons-021354) Ensuring network default is active
	I1211 23:34:18.754178   94369 main.go:141] libmachine: (addons-021354) Ensuring network mk-addons-021354 is active
	I1211 23:34:18.754729   94369 main.go:141] libmachine: (addons-021354) Getting domain xml...
	I1211 23:34:18.755564   94369 main.go:141] libmachine: (addons-021354) Creating domain...
	I1211 23:34:19.961705   94369 main.go:141] libmachine: (addons-021354) Waiting to get IP...
	I1211 23:34:19.962505   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:19.962873   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:19.962908   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:19.962860   94392 retry.go:31] will retry after 218.55825ms: waiting for machine to come up
	I1211 23:34:20.183538   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:20.183998   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:20.184029   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:20.183958   94392 retry.go:31] will retry after 278.620642ms: waiting for machine to come up
	I1211 23:34:20.464621   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:20.465135   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:20.465158   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:20.465090   94392 retry.go:31] will retry after 457.396089ms: waiting for machine to come up
	I1211 23:34:20.923898   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:20.924379   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:20.924405   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:20.924344   94392 retry.go:31] will retry after 367.140818ms: waiting for machine to come up
	I1211 23:34:21.292951   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:21.293415   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:21.293444   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:21.293366   94392 retry.go:31] will retry after 528.658319ms: waiting for machine to come up
	I1211 23:34:21.824318   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:21.824736   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:21.824760   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:21.824703   94392 retry.go:31] will retry after 693.958686ms: waiting for machine to come up
	I1211 23:34:22.520831   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:22.521279   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:22.521310   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:22.521224   94392 retry.go:31] will retry after 1.049432061s: waiting for machine to come up
	I1211 23:34:23.571993   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:23.572530   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:23.572561   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:23.572469   94392 retry.go:31] will retry after 1.299191566s: waiting for machine to come up
	I1211 23:34:24.874165   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:24.874604   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:24.874624   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:24.874550   94392 retry.go:31] will retry after 1.848004594s: waiting for machine to come up
	I1211 23:34:26.724008   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:26.724509   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:26.724535   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:26.724456   94392 retry.go:31] will retry after 2.062176111s: waiting for machine to come up
	I1211 23:34:28.787705   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:28.788119   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:28.788141   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:28.788070   94392 retry.go:31] will retry after 2.215274562s: waiting for machine to come up
	I1211 23:34:31.006847   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:31.007373   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:31.007401   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:31.007334   94392 retry.go:31] will retry after 2.679029007s: waiting for machine to come up
	I1211 23:34:33.688071   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:33.688469   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:33.688492   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:33.688427   94392 retry.go:31] will retry after 4.244655837s: waiting for machine to come up
	I1211 23:34:37.937787   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:37.938128   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:37.938153   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:37.938085   94392 retry.go:31] will retry after 3.67328737s: waiting for machine to come up
	I1211 23:34:41.615770   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.616215   94369 main.go:141] libmachine: (addons-021354) Found IP for machine: 192.168.39.225
	I1211 23:34:41.616231   94369 main.go:141] libmachine: (addons-021354) Reserving static IP address...
	I1211 23:34:41.616241   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has current primary IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.616643   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find host DHCP lease matching {name: "addons-021354", mac: "52:54:00:f7:1d:ff", ip: "192.168.39.225"} in network mk-addons-021354
	I1211 23:34:41.691586   94369 main.go:141] libmachine: (addons-021354) DBG | Getting to WaitForSSH function...
	I1211 23:34:41.691641   94369 main.go:141] libmachine: (addons-021354) Reserved static IP address: 192.168.39.225
	I1211 23:34:41.691654   94369 main.go:141] libmachine: (addons-021354) Waiting for SSH to be available...
	I1211 23:34:41.694518   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.695077   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:41.695106   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.695302   94369 main.go:141] libmachine: (addons-021354) DBG | Using SSH client type: external
	I1211 23:34:41.695329   94369 main.go:141] libmachine: (addons-021354) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa (-rw-------)
	I1211 23:34:41.695375   94369 main.go:141] libmachine: (addons-021354) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:34:41.695390   94369 main.go:141] libmachine: (addons-021354) DBG | About to run SSH command:
	I1211 23:34:41.695401   94369 main.go:141] libmachine: (addons-021354) DBG | exit 0
	I1211 23:34:41.819929   94369 main.go:141] libmachine: (addons-021354) DBG | SSH cmd err, output: <nil>: 
	I1211 23:34:41.820213   94369 main.go:141] libmachine: (addons-021354) KVM machine creation complete!
	I1211 23:34:41.820583   94369 main.go:141] libmachine: (addons-021354) Calling .GetConfigRaw
	I1211 23:34:41.821223   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:41.821411   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:41.821624   94369 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1211 23:34:41.821644   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:34:41.823336   94369 main.go:141] libmachine: Detecting operating system of created instance...
	I1211 23:34:41.823377   94369 main.go:141] libmachine: Waiting for SSH to be available...
	I1211 23:34:41.823383   94369 main.go:141] libmachine: Getting to WaitForSSH function...
	I1211 23:34:41.823389   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:41.826349   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.826693   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:41.826723   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.826856   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:41.827049   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:41.827231   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:41.827385   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:41.827552   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:41.827781   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:41.827796   94369 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1211 23:34:41.934931   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
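For reference, the `exit 0` readiness probe logged above can be reproduced with a short Go sketch using golang.org/x/crypto/ssh (an external module; this is not libmachine's actual SSH client). The address and username below come from this log, while the key path is a placeholder:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// Minimal stand-in for the WaitForSSH step: dial with a private key and run `exit 0`.
func sshReady(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	// 192.168.39.225 and "docker" match the lease and username above; the key path is illustrative.
	fmt.Println(sshReady("192.168.39.225:22", "docker", "/path/to/machines/addons-021354/id_rsa"))
}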
	I1211 23:34:41.934959   94369 main.go:141] libmachine: Detecting the provisioner...
	I1211 23:34:41.934972   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:41.937932   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.938305   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:41.938340   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.938483   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:41.938717   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:41.938872   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:41.939025   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:41.939203   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:41.939388   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:41.939399   94369 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1211 23:34:42.048570   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1211 23:34:42.048701   94369 main.go:141] libmachine: found compatible host: buildroot
	I1211 23:34:42.048712   94369 main.go:141] libmachine: Provisioning with buildroot...
	I1211 23:34:42.048721   94369 main.go:141] libmachine: (addons-021354) Calling .GetMachineName
	I1211 23:34:42.048996   94369 buildroot.go:166] provisioning hostname "addons-021354"
	I1211 23:34:42.049026   94369 main.go:141] libmachine: (addons-021354) Calling .GetMachineName
	I1211 23:34:42.049237   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.052181   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.052558   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.052584   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.052722   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.052907   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.053149   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.053326   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.053503   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:42.053683   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:42.053694   94369 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-021354 && echo "addons-021354" | sudo tee /etc/hostname
	I1211 23:34:42.174313   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-021354
	
	I1211 23:34:42.174385   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.177253   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.177597   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.177621   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.177816   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.177975   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.178096   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.178207   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.178385   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:42.178559   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:42.178574   94369 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-021354' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-021354/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-021354' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:34:42.293305   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:34:42.293343   94369 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1211 23:34:42.293405   94369 buildroot.go:174] setting up certificates
	I1211 23:34:42.293426   94369 provision.go:84] configureAuth start
	I1211 23:34:42.293440   94369 main.go:141] libmachine: (addons-021354) Calling .GetMachineName
	I1211 23:34:42.293761   94369 main.go:141] libmachine: (addons-021354) Calling .GetIP
	I1211 23:34:42.296271   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.296595   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.296633   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.296853   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.299029   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.299361   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.299403   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.299508   94369 provision.go:143] copyHostCerts
	I1211 23:34:42.299587   94369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1211 23:34:42.299791   94369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1211 23:34:42.299896   94369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1211 23:34:42.300021   94369 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.addons-021354 san=[127.0.0.1 192.168.39.225 addons-021354 localhost minikube]
	I1211 23:34:42.379626   94369 provision.go:177] copyRemoteCerts
	I1211 23:34:42.379701   94369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:34:42.379729   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.382464   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.382775   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.382804   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.383011   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.383212   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.383386   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.383532   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:34:42.466523   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 23:34:42.491645   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:34:42.515877   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 23:34:42.540074   94369 provision.go:87] duration metric: took 246.632691ms to configureAuth
	I1211 23:34:42.540124   94369 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:34:42.540357   94369 config.go:182] Loaded profile config "addons-021354": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:34:42.540479   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.543110   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.543450   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.543484   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.543684   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.543877   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.544034   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.544152   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.544297   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:42.544455   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:42.544469   94369 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:34:42.771254   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:34:42.771290   94369 main.go:141] libmachine: Checking connection to Docker...
	I1211 23:34:42.771298   94369 main.go:141] libmachine: (addons-021354) Calling .GetURL
	I1211 23:34:42.772695   94369 main.go:141] libmachine: (addons-021354) DBG | Using libvirt version 6000000
	I1211 23:34:42.774805   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.775131   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.775164   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.775353   94369 main.go:141] libmachine: Docker is up and running!
	I1211 23:34:42.775371   94369 main.go:141] libmachine: Reticulating splines...
	I1211 23:34:42.775382   94369 client.go:171] duration metric: took 24.744846556s to LocalClient.Create
	I1211 23:34:42.775408   94369 start.go:167] duration metric: took 24.744911505s to libmachine.API.Create "addons-021354"
	I1211 23:34:42.775429   94369 start.go:293] postStartSetup for "addons-021354" (driver="kvm2")
	I1211 23:34:42.775443   94369 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:34:42.775467   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.775735   94369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:34:42.775762   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.777894   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.778200   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.778231   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.778355   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.778522   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.778652   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.778778   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:34:42.862454   94369 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:34:42.866931   94369 info.go:137] Remote host: Buildroot 2023.02.9
	I1211 23:34:42.866959   94369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1211 23:34:42.867067   94369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1211 23:34:42.867093   94369 start.go:296] duration metric: took 91.655173ms for postStartSetup
	I1211 23:34:42.867139   94369 main.go:141] libmachine: (addons-021354) Calling .GetConfigRaw
	I1211 23:34:42.867784   94369 main.go:141] libmachine: (addons-021354) Calling .GetIP
	I1211 23:34:42.870295   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.870709   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.870738   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.870975   94369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/config.json ...
	I1211 23:34:42.871155   94369 start.go:128] duration metric: took 24.860159392s to createHost
	I1211 23:34:42.871182   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.873555   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.873901   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.873935   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.874051   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.874253   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.874445   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.874573   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.874744   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:42.874898   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:42.874908   94369 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:34:42.980743   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733960082.946357302
	
	I1211 23:34:42.980778   94369 fix.go:216] guest clock: 1733960082.946357302
	I1211 23:34:42.980787   94369 fix.go:229] Guest: 2024-12-11 23:34:42.946357302 +0000 UTC Remote: 2024-12-11 23:34:42.871169504 +0000 UTC m=+24.967954718 (delta=75.187798ms)
	I1211 23:34:42.980827   94369 fix.go:200] guest clock delta is within tolerance: 75.187798ms
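The clock-skew check above compares the guest's `date +%s.%N` output against the host-side timestamp of the same command. A minimal Go sketch of that arithmetic, using the two values from this log (the 2-second tolerance is only illustrative; minikube's actual threshold is not shown here):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Guest value returned by `date +%s.%N` in the log above.
	guestSec, err := strconv.ParseFloat("1733960082.946357302", 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(guestSec*float64(time.Second)))
	// Host-side timestamp recorded for the same command.
	host := time.Date(2024, 12, 11, 23, 34, 42, 871169504, time.UTC)

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}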
	I1211 23:34:42.980835   94369 start.go:83] releasing machines lock for "addons-021354", held for 24.969923936s
	I1211 23:34:42.980858   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.981203   94369 main.go:141] libmachine: (addons-021354) Calling .GetIP
	I1211 23:34:42.983909   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.984245   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.984273   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.984407   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.984897   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.985080   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.985188   94369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:34:42.985247   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.985295   94369 ssh_runner.go:195] Run: cat /version.json
	I1211 23:34:42.985323   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.987869   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.988121   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.988211   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.988239   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.988341   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.988435   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.988504   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.988522   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.988588   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.988659   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.988757   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.988784   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:34:42.988874   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.988994   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:34:43.091309   94369 ssh_runner.go:195] Run: systemctl --version
	I1211 23:34:43.097573   94369 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:34:43.255661   94369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:34:43.262937   94369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:34:43.263022   94369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:34:43.279351   94369 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:34:43.279386   94369 start.go:495] detecting cgroup driver to use...
	I1211 23:34:43.279468   94369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:34:43.294921   94369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:34:43.309278   94369 docker.go:217] disabling cri-docker service (if available) ...
	I1211 23:34:43.309335   94369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:34:43.323080   94369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:34:43.336590   94369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:34:43.452359   94369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:34:43.617934   94369 docker.go:233] disabling docker service ...
	I1211 23:34:43.618010   94369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:34:43.632379   94369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:34:43.645493   94369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:34:43.779942   94369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:34:43.905781   94369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:34:43.920043   94369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:34:43.938665   94369 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1211 23:34:43.938740   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:43.949165   94369 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:34:43.949252   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:43.959721   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:43.969897   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:43.980430   94369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:34:43.991220   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:44.001386   94369 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:44.018913   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:44.029279   94369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:34:44.038630   94369 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:34:44.038683   94369 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:34:44.051397   94369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
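The sequence just logged (the sysctl probe fails, br_netfilter is loaded, then IPv4 forwarding is switched on) can be sketched locally in Go; in the real flow each command runs on the guest via ssh_runner and needs root:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// If the bridge netfilter sysctl cannot be read, the br_netfilter module is not loaded.
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables unavailable, loading br_netfilter")
		_ = run("modprobe", "br_netfilter") // requires root
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` from the log.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("could not enable ip_forward (requires root):", err)
	}
}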
	I1211 23:34:44.061670   94369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:34:44.183084   94369 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:34:44.273357   94369 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:34:44.273444   94369 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:34:44.278366   94369 start.go:563] Will wait 60s for crictl version
	I1211 23:34:44.278436   94369 ssh_runner.go:195] Run: which crictl
	I1211 23:34:44.282275   94369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:34:44.325117   94369 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:34:44.325238   94369 ssh_runner.go:195] Run: crio --version
	I1211 23:34:44.353131   94369 ssh_runner.go:195] Run: crio --version
	I1211 23:34:44.383439   94369 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1211 23:34:44.385013   94369 main.go:141] libmachine: (addons-021354) Calling .GetIP
	I1211 23:34:44.387971   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:44.388320   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:44.388352   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:44.388617   94369 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:34:44.393042   94369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:34:44.406459   94369 kubeadm.go:883] updating cluster {Name:addons-021354 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:34:44.406571   94369 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:34:44.406621   94369 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:34:44.440300   94369 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1211 23:34:44.440371   94369 ssh_runner.go:195] Run: which lz4
	I1211 23:34:44.444596   94369 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:34:44.448992   94369 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:34:44.449028   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1211 23:34:45.749547   94369 crio.go:462] duration metric: took 1.304999714s to copy over tarball
	I1211 23:34:45.749631   94369 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:34:47.887053   94369 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.137380299s)
	I1211 23:34:47.887097   94369 crio.go:469] duration metric: took 2.137514144s to extract the tarball
	I1211 23:34:47.887111   94369 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1211 23:34:47.925261   94369 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:34:47.970564   94369 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:34:47.970598   94369 cache_images.go:84] Images are preloaded, skipping loading
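The preload check above amounts to listing CRI images and looking for the kube-apiserver image of the target version. A rough Go equivalent, assuming `crictl images --output json` keeps its list under an `images` array with `repoTags` fields (those field names are an assumption, not taken from this log):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// Assumed shape of `crictl images --output json`; only the fields used here are declared.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected crictl output:", err)
		return
	}
	// Same heuristic the log shows: the preload counts as applied once the
	// kube-apiserver image for the target version is present.
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, "kube-apiserver:v1.31.2") {
				fmt.Println("all images are preloaded")
				return
			}
		}
	}
	fmt.Println("preload missing; restore the tarball or pull images")
}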
	I1211 23:34:47.970610   94369 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.31.2 crio true true} ...
	I1211 23:34:47.970780   94369 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-021354 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:34:47.970886   94369 ssh_runner.go:195] Run: crio config
	I1211 23:34:48.019180   94369 cni.go:84] Creating CNI manager for ""
	I1211 23:34:48.019206   94369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:34:48.019220   94369 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 23:34:48.019241   94369 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-021354 NodeName:addons-021354 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:34:48.019387   94369 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-021354"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.225"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
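The kubeadm config printed above is rendered from the cluster settings listed at kubeadm.go:189. A cut-down Go text/template sketch of that kind of rendering, using only values visible in this log (the template itself is illustrative, not minikube's actual one):

package main

import (
	"os"
	"text/template"
)

// Hypothetical, cut-down stand-in for the values fed into the kubeadm template.
type nodeCfg struct {
	Name      string
	NodeIP    string
	PodSubnet string
	K8sVer    string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.Name}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVer}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, nodeCfg{
		Name: "addons-021354", NodeIP: "192.168.39.225",
		PodSubnet: "10.244.0.0/16", K8sVer: "v1.31.2",
	})
}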
	I1211 23:34:48.019464   94369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1211 23:34:48.030214   94369 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 23:34:48.030305   94369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:34:48.040711   94369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1211 23:34:48.058366   94369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:34:48.075616   94369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1211 23:34:48.093085   94369 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I1211 23:34:48.097302   94369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:34:48.110395   94369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:34:48.234458   94369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:34:48.251528   94369 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354 for IP: 192.168.39.225
	I1211 23:34:48.251553   94369 certs.go:194] generating shared ca certs ...
	I1211 23:34:48.251570   94369 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.251738   94369 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1211 23:34:48.320769   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt ...
	I1211 23:34:48.320801   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt: {Name:mk18b608077b42fcba0e790a13db29beca86d40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.321010   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key ...
	I1211 23:34:48.321030   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key: {Name:mk2b0a248c0dc5d6780db8d7389e3ce61a08ccca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.321149   94369 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1211 23:34:48.534784   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt ...
	I1211 23:34:48.534817   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt: {Name:mkb7f6f01c296a3f917af5c8a02f5476362bdc37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.535023   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key ...
	I1211 23:34:48.535042   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key: {Name:mk7dfc75f1bbd84ca395fd67ba0905ce60c57d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.535168   94369 certs.go:256] generating profile certs ...
	I1211 23:34:48.535264   94369 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.key
	I1211 23:34:48.535285   94369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt with IP's: []
	I1211 23:34:48.753672   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt ...
	I1211 23:34:48.753707   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: {Name:mk66efaed89910931834575b7294af4c2524ef5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.753901   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.key ...
	I1211 23:34:48.753933   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.key: {Name:mkae769d04795e681b2a27f0079fb20a11c3e804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.754055   94369 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key.a60bf79e
	I1211 23:34:48.754082   94369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt.a60bf79e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225]
	I1211 23:34:48.844857   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt.a60bf79e ...
	I1211 23:34:48.844890   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt.a60bf79e: {Name:mk77eeaebd337092de4f92552ce2038f6245cb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.845075   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key.a60bf79e ...
	I1211 23:34:48.845099   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key.a60bf79e: {Name:mk7e974e7014007323859a66a208a75ba3d46736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.845193   94369 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt.a60bf79e -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt
	I1211 23:34:48.845296   94369 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key.a60bf79e -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key
	I1211 23:34:48.845367   94369 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.key
	I1211 23:34:48.845390   94369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.crt with IP's: []
	I1211 23:34:48.905793   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.crt ...
	I1211 23:34:48.905822   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.crt: {Name:mkba9992178a3089f32a431c493454ddca2f3a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.906004   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.key ...
	I1211 23:34:48.906028   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.key: {Name:mkfc139c2f20d1c2a8344c445c646ad58142ed8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
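The client/apiserver/aggregator certificates generated above are ordinary CA-signed x509 certs. A self-contained Go sketch of the same idea with crypto/x509 — a throwaway CA plus one leaf certificate carrying the IP SANs logged for the apiserver cert (key sizes, lifetimes and subject names are illustrative, not minikube's):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Leaf cert with the IP SANs logged for the apiserver certificate above.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.225"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// Emit both certs as PEM; real code would also persist the private keys.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}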
	I1211 23:34:48.906265   94369 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1211 23:34:48.906308   94369 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1211 23:34:48.906355   94369 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:34:48.906392   94369 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1211 23:34:48.907110   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:34:48.945595   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:34:48.974831   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:34:49.002072   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 23:34:49.026606   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:34:49.051242   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1211 23:34:49.075778   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:34:49.100139   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 23:34:49.124995   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:34:49.149379   94369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:34:49.166340   94369 ssh_runner.go:195] Run: openssl version
	I1211 23:34:49.172319   94369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 23:34:49.183032   94369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:34:49.187622   94369 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:34:49.187700   94369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:34:49.193473   94369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
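The b5213941.0 symlink above is the usual OpenSSL subject-hash trust-store layout. A small Go sketch of the same two steps (compute the hash, then link <hash>.0 to the CA), using the paths from this log; it shells out to openssl and needs root to write under /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log above
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already linked:", link)
		return
	}
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink failed (requires root):", err)
		return
	}
	fmt.Println("linked", link, "->", cert)
}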
	I1211 23:34:49.204472   94369 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:34:49.208779   94369 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:34:49.208842   94369 kubeadm.go:392] StartCluster: {Name:addons-021354 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:34:49.208957   94369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:34:49.209033   94369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:34:49.244486   94369 cri.go:89] found id: ""
	I1211 23:34:49.244584   94369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:34:49.254780   94369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:34:49.265481   94369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:34:49.277627   94369 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:34:49.277650   94369 kubeadm.go:157] found existing configuration files:
	
	I1211 23:34:49.277699   94369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:34:49.287135   94369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:34:49.287200   94369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:34:49.296941   94369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:34:49.306402   94369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:34:49.306469   94369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:34:49.316405   94369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:34:49.325919   94369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:34:49.325988   94369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:34:49.335575   94369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:34:49.344724   94369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:34:49.344787   94369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:34:49.354496   94369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:34:49.518965   94369 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:35:00.177960   94369 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1211 23:35:00.178054   94369 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 23:35:00.178136   94369 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:35:00.178217   94369 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:35:00.178295   94369 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:35:00.178418   94369 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:35:00.180050   94369 out.go:235]   - Generating certificates and keys ...
	I1211 23:35:00.180154   94369 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 23:35:00.180222   94369 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 23:35:00.180290   94369 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:35:00.180338   94369 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:35:00.180389   94369 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:35:00.180456   94369 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1211 23:35:00.180539   94369 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1211 23:35:00.180703   94369 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-021354 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I1211 23:35:00.180759   94369 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1211 23:35:00.180879   94369 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-021354 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I1211 23:35:00.180938   94369 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:35:00.181005   94369 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:35:00.181050   94369 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1211 23:35:00.181102   94369 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:35:00.181146   94369 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:35:00.181198   94369 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:35:00.181247   94369 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:35:00.181313   94369 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:35:00.181420   94369 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:35:00.181545   94369 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:35:00.181647   94369 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:35:00.182986   94369 out.go:235]   - Booting up control plane ...
	I1211 23:35:00.183083   94369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:35:00.183170   94369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:35:00.183268   94369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:35:00.183397   94369 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:35:00.183487   94369 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:35:00.183521   94369 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 23:35:00.183654   94369 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:35:00.183746   94369 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:35:00.183804   94369 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00206526s
	I1211 23:35:00.183891   94369 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1211 23:35:00.183965   94369 kubeadm.go:310] [api-check] The API server is healthy after 5.002956694s
	I1211 23:35:00.184093   94369 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:35:00.184260   94369 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:35:00.184359   94369 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:35:00.184582   94369 kubeadm.go:310] [mark-control-plane] Marking the node addons-021354 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:35:00.184668   94369 kubeadm.go:310] [bootstrap-token] Using token: fkc42n.k8j80h5ids5wbhf0
	I1211 23:35:00.186809   94369 out.go:235]   - Configuring RBAC rules ...
	I1211 23:35:00.186933   94369 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:35:00.187022   94369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:35:00.187170   94369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:35:00.187328   94369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:35:00.187475   94369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:35:00.187623   94369 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:35:00.187727   94369 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:35:00.187780   94369 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 23:35:00.187854   94369 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 23:35:00.187864   94369 kubeadm.go:310] 
	I1211 23:35:00.187945   94369 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 23:35:00.187959   94369 kubeadm.go:310] 
	I1211 23:35:00.188109   94369 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 23:35:00.188125   94369 kubeadm.go:310] 
	I1211 23:35:00.188166   94369 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 23:35:00.188253   94369 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:35:00.188336   94369 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:35:00.188345   94369 kubeadm.go:310] 
	I1211 23:35:00.188421   94369 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 23:35:00.188431   94369 kubeadm.go:310] 
	I1211 23:35:00.188504   94369 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:35:00.188512   94369 kubeadm.go:310] 
	I1211 23:35:00.188592   94369 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 23:35:00.188707   94369 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:35:00.188765   94369 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:35:00.188771   94369 kubeadm.go:310] 
	I1211 23:35:00.188843   94369 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:35:00.188906   94369 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 23:35:00.188912   94369 kubeadm.go:310] 
	I1211 23:35:00.189009   94369 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fkc42n.k8j80h5ids5wbhf0 \
	I1211 23:35:00.189102   94369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1211 23:35:00.189122   94369 kubeadm.go:310] 	--control-plane 
	I1211 23:35:00.189129   94369 kubeadm.go:310] 
	I1211 23:35:00.189196   94369 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:35:00.189203   94369 kubeadm.go:310] 
	I1211 23:35:00.189267   94369 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fkc42n.k8j80h5ids5wbhf0 \
	I1211 23:35:00.189376   94369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1211 23:35:00.189387   94369 cni.go:84] Creating CNI manager for ""
	I1211 23:35:00.189397   94369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:35:00.190908   94369 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 23:35:00.192283   94369 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 23:35:00.203788   94369 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
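	The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist above is not shown in the log. As a hedged illustration only (the exact fields and subnet minikube writes may differ), a bridge-plugin conflist of that general shape can be produced with a Go sketch like this:

	    // Illustrative sketch: writes a typical CNI bridge-plugin conflist.
	    // The JSON below is an assumption based on the standard bridge/portmap
	    // plugins, not the exact file minikube generated in this run.
	    package main

	    import (
	        "log"
	        "os"
	    )

	    const bridgeConflist = `{
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": {
	            "type": "host-local",
	            "subnet": "10.244.0.0/16"
	          }
	        },
	        {
	          "type": "portmap",
	          "capabilities": {"portMappings": true}
	        }
	      ]
	    }`

	    func main() {
	        // Writing under /etc/cni/net.d normally requires root; adjust the path for local testing.
	        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
	            log.Fatal(err)
	        }
	    }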
	I1211 23:35:00.225264   94369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:35:00.225411   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:00.225418   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-021354 minikube.k8s.io/updated_at=2024_12_11T23_35_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=addons-021354 minikube.k8s.io/primary=true
	I1211 23:35:00.273435   94369 ops.go:34] apiserver oom_adj: -16
	I1211 23:35:00.355692   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:00.855791   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:01.356376   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:01.856632   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:02.355728   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:02.856575   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:03.356570   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:03.855769   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:04.356625   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:04.856042   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:04.990028   94369 kubeadm.go:1113] duration metric: took 4.764699966s to wait for elevateKubeSystemPrivileges
	I1211 23:35:04.990073   94369 kubeadm.go:394] duration metric: took 15.781235624s to StartCluster
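	The block of repeated "kubectl get sa default" runs above is a polling loop: the command is retried roughly every 500ms until the default ServiceAccount exists (about 4.76s in this run). A minimal Go sketch of the same pattern, not minikube's actual code, using plain os/exec:

	    // Sketch of the retry loop visible in the log: poll for the "default"
	    // ServiceAccount every 500ms until it appears or a deadline passes.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
	                "get", "sa", "default")
	            if err := cmd.Run(); err == nil {
	                fmt.Println("default ServiceAccount is present")
	                return
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for the default ServiceAccount")
	    }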
	I1211 23:35:04.990099   94369 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:35:04.990241   94369 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:35:04.990639   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:35:04.990855   94369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:35:04.990898   94369 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:35:04.991006   94369 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1211 23:35:04.991129   94369 config.go:182] Loaded profile config "addons-021354": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:35:04.991145   94369 addons.go:69] Setting yakd=true in profile "addons-021354"
	I1211 23:35:04.991149   94369 addons.go:69] Setting default-storageclass=true in profile "addons-021354"
	I1211 23:35:04.991166   94369 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-021354"
	I1211 23:35:04.991175   94369 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-021354"
	I1211 23:35:04.991181   94369 addons.go:69] Setting registry=true in profile "addons-021354"
	I1211 23:35:04.991188   94369 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-021354"
	I1211 23:35:04.991195   94369 addons.go:69] Setting storage-provisioner=true in profile "addons-021354"
	I1211 23:35:04.991199   94369 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-021354"
	I1211 23:35:04.991213   94369 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-021354"
	I1211 23:35:04.991224   94369 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-021354"
	I1211 23:35:04.991231   94369 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-021354"
	I1211 23:35:04.991242   94369 addons.go:69] Setting gcp-auth=true in profile "addons-021354"
	I1211 23:35:04.991258   94369 mustload.go:65] Loading cluster: addons-021354
	I1211 23:35:04.991266   94369 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-021354"
	I1211 23:35:04.991269   94369 addons.go:69] Setting volumesnapshots=true in profile "addons-021354"
	I1211 23:35:04.991288   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991301   94369 addons.go:234] Setting addon volumesnapshots=true in "addons-021354"
	I1211 23:35:04.991320   94369 addons.go:69] Setting ingress=true in profile "addons-021354"
	I1211 23:35:04.991341   94369 addons.go:234] Setting addon ingress=true in "addons-021354"
	I1211 23:35:04.991356   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991368   94369 addons.go:69] Setting ingress-dns=true in profile "addons-021354"
	I1211 23:35:04.991385   94369 addons.go:234] Setting addon ingress-dns=true in "addons-021354"
	I1211 23:35:04.991386   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991430   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991456   94369 config.go:182] Loaded profile config "addons-021354": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:35:04.991702   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991722   94369 addons.go:69] Setting volcano=true in profile "addons-021354"
	I1211 23:35:04.991190   94369 addons.go:234] Setting addon registry=true in "addons-021354"
	I1211 23:35:04.991735   94369 addons.go:234] Setting addon volcano=true in "addons-021354"
	I1211 23:35:04.991750   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991780   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991808   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991816   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991824   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991846   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991215   94369 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-021354"
	I1211 23:35:04.991299   94369 addons.go:69] Setting inspektor-gadget=true in profile "addons-021354"
	I1211 23:35:04.991898   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991905   94369 addons.go:234] Setting addon inspektor-gadget=true in "addons-021354"
	I1211 23:35:04.991909   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991928   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991204   94369 addons.go:234] Setting addon storage-provisioner=true in "addons-021354"
	I1211 23:35:04.992227   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991928   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.992287   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991233   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991753   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991705   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.992499   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991190   94369 addons.go:69] Setting cloud-spanner=true in profile "addons-021354"
	I1211 23:35:04.992672   94369 addons.go:234] Setting addon cloud-spanner=true in "addons-021354"
	I1211 23:35:04.992683   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.992703   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.992715   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.992805   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.992837   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991758   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991175   94369 addons.go:234] Setting addon yakd=true in "addons-021354"
	I1211 23:35:04.993062   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.993080   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.993089   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.993246   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.993292   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991768   94369 addons.go:69] Setting metrics-server=true in profile "addons-021354"
	I1211 23:35:04.993382   94369 addons.go:234] Setting addon metrics-server=true in "addons-021354"
	I1211 23:35:04.993423   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.993797   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.993859   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991768   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.996462   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991791   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.996595   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.996631   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.005963   94369 out.go:177] * Verifying Kubernetes components...
	I1211 23:35:04.992261   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.012723   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I1211 23:35:05.012951   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35017
	I1211 23:35:05.013311   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.013552   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.013965   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.013995   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.014335   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.014391   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.014411   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.014803   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.015000   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.015019   94369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:35:05.015129   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
	I1211 23:35:05.015034   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.015196   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.015474   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.016016   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.016040   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.016437   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.018225   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1211 23:35:05.022613   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I1211 23:35:05.023934   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.023985   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.024352   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.024400   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.025922   94369 addons.go:234] Setting addon default-storageclass=true in "addons-021354"
	I1211 23:35:05.025990   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:05.026375   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.026419   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.026827   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.026865   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.027372   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.027523   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.027604   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
	I1211 23:35:05.028068   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.028088   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.028169   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.028858   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.029036   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.029060   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.029222   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.029234   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.029499   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.030120   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.030159   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.030391   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.030471   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.031008   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.031058   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.032392   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:05.032763   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.032799   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.033787   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I1211 23:35:05.046041   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I1211 23:35:05.046759   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.051722   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
	I1211 23:35:05.051734   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I1211 23:35:05.051856   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.051875   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.052252   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.052378   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.052387   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.053031   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.053056   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.053357   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.053377   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.053444   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.053664   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I1211 23:35:05.053696   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.053664   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.054020   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.054061   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.054370   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.054415   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.054636   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.055135   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.055158   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.055534   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.056875   94369 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-021354"
	I1211 23:35:05.056921   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:05.057287   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.057327   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.058516   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1211 23:35:05.058943   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.059050   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I1211 23:35:05.059466   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.059489   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.059532   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.060293   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.060335   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.060662   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.060682   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.060760   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I1211 23:35:05.061051   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.061210   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.061633   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.061662   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.062300   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.062319   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.065654   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37891
	I1211 23:35:05.066526   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.068493   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.069106   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.069155   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.072221   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.072276   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.072581   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.073211   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.073232   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.073641   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.073907   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.074306   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.074338   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.074488   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.074503   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.074895   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.075453   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.075489   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.092020   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1211 23:35:05.092128   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I1211 23:35:05.092470   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.093092   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.093130   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.093204   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.093620   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.094243   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.094305   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.094558   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I1211 23:35:05.094909   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I1211 23:35:05.095093   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.095603   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.095621   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.095682   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.096028   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.096248   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.096263   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.096490   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.096621   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.096633   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.097047   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.097606   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.097673   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.098037   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43655
	I1211 23:35:05.098752   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.100016   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1211 23:35:05.100657   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.100671   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.100764   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
	I1211 23:35:05.101154   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.101242   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:35:05.101412   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.101431   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.101645   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.101662   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.101815   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.102096   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.102134   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.102223   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.102269   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.102822   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.102862   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.102871   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:35:05.102891   94369 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:35:05.102916   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.102999   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.103195   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.103376   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.105170   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.105872   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I1211 23:35:05.106270   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.106680   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.107138   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.107156   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.107484   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.107669   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.108306   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.109756   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.110047   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.110277   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.110413   94369 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1211 23:35:05.110533   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.110738   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.110883   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.111002   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
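	Each "new ssh client" line above corresponds to an SSH connection to the VM as user docker on 192.168.39.225:22, authenticated with the machine's id_rsa key. A rough equivalent with golang.org/x/crypto/ssh, offered only as a sketch (not minikube's sshutil package):

	    // Sketch: open an SSH session to the test VM and run one command,
	    // using the key path and address reported in the log above.
	    package main

	    import (
	        "log"
	        "os"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa")
	        if err != nil {
	            log.Fatal(err)
	        }
	        signer, err := ssh.ParsePrivateKey(keyBytes)
	        if err != nil {
	            log.Fatal(err)
	        }
	        cfg := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	        }
	        client, err := ssh.Dial("tcp", "192.168.39.225:22", cfg)
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer client.Close()

	        session, err := client.NewSession()
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer session.Close()

	        out, err := session.CombinedOutput("sudo crictl ps -a --quiet")
	        if err != nil {
	            log.Fatal(err)
	        }
	        os.Stdout.Write(out)
	    }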
	I1211 23:35:05.111639   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:35:05.111815   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
	I1211 23:35:05.111824   94369 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:35:05.111839   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1211 23:35:05.111857   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.112322   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.113148   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.113166   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.113701   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.114239   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.115442   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:35:05.115604   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.116293   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.116373   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.116387   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.116504   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.116728   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.117199   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.117885   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.118037   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.118462   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:35:05.119624   94369 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:35:05.119638   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I1211 23:35:05.120131   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.120661   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.120682   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.121035   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.121120   94369 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:35:05.121143   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:35:05.121166   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.121227   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.121624   94369 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1211 23:35:05.122155   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:35:05.123836   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I1211 23:35:05.124273   94369 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:35:05.124291   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:35:05.124310   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.125147   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37979
	I1211 23:35:05.125473   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.125743   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.126160   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.126187   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.126239   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.126396   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:35:05.126627   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.126696   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.127210   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.127641   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.127717   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.127719   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.127737   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.128026   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.128193   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.128220   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.128379   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.128643   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.129969   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:35:05.130160   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.130258   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.130822   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.130847   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.131068   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.131210   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.131278   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.131437   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.131732   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.131758   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.132548   94369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:35:05.133487   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:35:05.134861   94369 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1211 23:35:05.134934   94369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1211 23:35:05.134938   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:35:05.134970   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I1211 23:35:05.135085   94369 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:35:05.135605   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.136242   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.136261   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.136293   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1211 23:35:05.136751   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.136826   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.136905   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.137087   94369 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:35:05.137105   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:35:05.137123   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.137176   94369 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1211 23:35:05.137185   94369 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1211 23:35:05.137198   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.137244   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:35:05.137255   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:35:05.137269   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.137311   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.137326   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.138301   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.138605   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.138725   94369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:35:05.140513   94369 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:35:05.140540   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:35:05.140561   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.140786   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.140828   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1211 23:35:05.141377   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.142385   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.142443   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.142496   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.142522   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.143320   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.143339   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.143638   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.143378   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:05.143695   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:05.143401   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.143717   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.143415   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I1211 23:35:05.143431   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.143869   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.143682   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.143889   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.143953   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:05.143966   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:05.143974   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.143980   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:05.143989   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:05.143996   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:05.144205   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.144228   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.144279   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.144281   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.144329   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.144405   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.144413   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:05.144426   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	W1211 23:35:05.144533   94369 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:35:05.144823   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.145177   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.145193   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.145312   94369 out.go:177]   - Using image docker.io/registry:2.8.3
	I1211 23:35:05.145553   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.145615   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.145628   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.145829   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.146035   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.146219   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.146330   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.146431   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.147418   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1211 23:35:05.147730   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.147900   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.148284   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.148426   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.148585   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.149036   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.149130   94369 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1211 23:35:05.149248   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.150025   94369 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:35:05.150025   94369 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1211 23:35:05.150904   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.151051   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.151075   94369 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:35:05.151092   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:35:05.151111   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.151467   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.151481   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.151843   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.152131   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.152312   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.152405   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.152692   94369 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1211 23:35:05.152777   94369 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:35:05.152791   94369 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:35:05.152834   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.154204   94369 out.go:177]   - Using image docker.io/busybox:stable
	I1211 23:35:05.154208   94369 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:35:05.154285   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:35:05.154307   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.154855   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.155709   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.155743   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.155871   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.156178   94369 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:35:05.156194   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:35:05.156212   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.156366   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.156380   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.156542   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.156747   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.157366   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.157393   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.157529   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.157702   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.157929   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.157937   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I1211 23:35:05.158351   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.158541   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.158796   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.159332   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.159358   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.159205   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.159374   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.159421   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.159589   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.159753   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.159903   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.160893   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I1211 23:35:05.161350   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.161842   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.161861   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.161863   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.162134   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.162304   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.162443   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.162462   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.162963   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.163120   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.163272   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.163408   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.163564   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.163749   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.164708   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.165378   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.165571   94369 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:35:05.165589   94369 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:35:05.165605   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.166808   94369 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:35:05.168289   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:35:05.168306   94369 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:35:05.168322   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.168431   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.168745   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.168773   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.168998   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.169220   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.169436   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.169675   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	W1211 23:35:05.170412   94369 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:51768->192.168.39.225:22: read: connection reset by peer
	I1211 23:35:05.170446   94369 retry.go:31] will retry after 211.284806ms: ssh: handshake failed: read tcp 192.168.39.1:51768->192.168.39.225:22: read: connection reset by peer
	I1211 23:35:05.171284   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.171780   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.171843   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.171968   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.172113   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.172226   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.172309   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.367782   94369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:35:05.368078   94369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:35:05.415466   94369 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:35:05.415495   94369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:35:05.424780   94369 node_ready.go:35] waiting up to 6m0s for node "addons-021354" to be "Ready" ...
	I1211 23:35:05.429344   94369 node_ready.go:49] node "addons-021354" has status "Ready":"True"
	I1211 23:35:05.429386   94369 node_ready.go:38] duration metric: took 4.553158ms for node "addons-021354" to be "Ready" ...
	I1211 23:35:05.429396   94369 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1211 23:35:05.442063   94369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:05.480046   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:35:05.494524   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:35:05.524327   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:35:05.546019   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:35:05.546050   94369 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:35:05.547370   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:35:05.560783   94369 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:35:05.560811   94369 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:35:05.563771   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:35:05.563790   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:35:05.582267   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:35:05.596439   94369 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:35:05.596463   94369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:35:05.598688   94369 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:35:05.598705   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1211 23:35:05.605369   94369 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:35:05.605386   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:35:05.611794   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:35:05.630426   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:35:05.704133   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:35:05.704159   94369 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:35:05.725417   94369 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:35:05.725456   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:35:05.742894   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:35:05.742932   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:35:05.745675   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:35:05.747898   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:35:05.775618   94369 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:35:05.775649   94369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:35:05.791719   94369 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:35:05.791750   94369 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:35:05.899857   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:35:05.899887   94369 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:35:05.923894   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:35:05.951683   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:35:05.951718   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:35:05.996895   94369 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:35:05.996931   94369 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:35:06.005814   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:35:06.005842   94369 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:35:06.090409   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:35:06.090445   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:35:06.152310   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:35:06.152353   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:35:06.262086   94369 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:35:06.262141   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:35:06.327942   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:35:06.334648   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:35:06.503690   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:35:06.503730   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:35:06.568045   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:35:06.808414   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:35:06.808448   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:35:07.250265   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:35:07.250309   94369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:35:07.446300   94369 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.078178099s)
	I1211 23:35:07.446333   94369 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1211 23:35:07.450296   94369 pod_ready.go:103] pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace has status "Ready":"False"
	I1211 23:35:07.731948   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:35:07.731972   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:35:07.998188   94369 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-021354" context rescaled to 1 replicas
	I1211 23:35:08.018806   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:35:08.018832   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:35:08.280221   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:35:08.280252   94369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:35:08.641703   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:35:09.604176   94369 pod_ready.go:103] pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace has status "Ready":"False"
	I1211 23:35:09.977101   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.497016735s)
	I1211 23:35:09.977170   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:09.977184   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:09.977542   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:09.977566   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:09.977582   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:09.977595   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:09.977860   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:09.977878   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:09.977897   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:12.083532   94369 pod_ready.go:93] pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.083620   94369 pod_ready.go:82] duration metric: took 6.64147023s for pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.083645   94369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zqjkl" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.127453   94369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:35:12.127497   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:12.130490   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:12.130885   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:12.130907   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:12.131143   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:12.131315   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:12.131512   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:12.131684   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:12.134407   94369 pod_ready.go:93] pod "coredns-7c65d6cfc9-zqjkl" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.134428   94369 pod_ready.go:82] duration metric: took 50.774481ms for pod "coredns-7c65d6cfc9-zqjkl" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.134442   94369 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.147432   94369 pod_ready.go:93] pod "etcd-addons-021354" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.147458   94369 pod_ready.go:82] duration metric: took 13.00608ms for pod "etcd-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.147472   94369 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.189516   94369 pod_ready.go:93] pod "kube-apiserver-addons-021354" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.189542   94369 pod_ready.go:82] duration metric: took 42.061425ms for pod "kube-apiserver-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.189556   94369 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.199843   94369 pod_ready.go:93] pod "kube-controller-manager-addons-021354" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.199868   94369 pod_ready.go:82] duration metric: took 10.301991ms for pod "kube-controller-manager-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.199881   94369 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nkpsm" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.347570   94369 pod_ready.go:93] pod "kube-proxy-nkpsm" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.347605   94369 pod_ready.go:82] duration metric: took 147.716679ms for pod "kube-proxy-nkpsm" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.347618   94369 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.563095   94369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:35:12.629699   94369 addons.go:234] Setting addon gcp-auth=true in "addons-021354"
	I1211 23:35:12.629757   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:12.630063   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:12.630104   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:12.646065   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I1211 23:35:12.646526   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:12.647128   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:12.647158   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:12.647583   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:12.648291   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:12.648350   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:12.663711   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I1211 23:35:12.664159   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:12.664708   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:12.664740   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:12.665081   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:12.665313   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:12.667081   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:12.667314   94369 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:35:12.667346   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:12.670656   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:12.671141   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:12.671175   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:12.671333   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:12.671541   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:12.671742   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:12.671904   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:12.752620   94369 pod_ready.go:93] pod "kube-scheduler-addons-021354" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.752649   94369 pod_ready.go:82] duration metric: took 405.02273ms for pod "kube-scheduler-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.752660   94369 pod_ready.go:39] duration metric: took 7.32325392s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1211 23:35:12.752682   94369 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:35:12.752768   94369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1211 23:35:14.379115   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.884550202s)
	I1211 23:35:14.379196   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379211   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379212   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.854846519s)
	I1211 23:35:14.379262   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379286   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379311   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.831918968s)
	I1211 23:35:14.379343   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379353   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379416   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.797109646s)
	I1211 23:35:14.379450   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379457   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.749003138s)
	I1211 23:35:14.379464   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379474   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379482   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379425   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.767606537s)
	I1211 23:35:14.379515   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379524   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379558   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.379574   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.379582   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.379585   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.633887714s)
	I1211 23:35:14.379615   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.379616   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379635   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379648   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.631734559s)
	I1211 23:35:14.379670   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379678   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379692   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.379700   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.379589   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379723   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379726   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.379734   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.379741   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379711   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.379759   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379765   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379773   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.455848228s)
	I1211 23:35:14.379748   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379793   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379802   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379854   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.379883   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.379891   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.051919528s)
	I1211 23:35:14.379893   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.379904   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379906   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379910   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379913   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380008   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.045326593s)
	I1211 23:35:14.380034   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380042   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380169   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.81209319s)
	W1211 23:35:14.380197   94369 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:35:14.380219   94369 retry.go:31] will retry after 296.729862ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:35:14.380272   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380281   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380289   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380296   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380351   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380371   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380377   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380384   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380390   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380426   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380443   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380449   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380456   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380462   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380499   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380517   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380523   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380530   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380536   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380575   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380590   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380606   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380612   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380619   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380625   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380662   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380670   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380680   94369 addons.go:475] Verifying addon ingress=true in "addons-021354"
	I1211 23:35:14.381529   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.381569   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.381576   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.381823   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.381854   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.381861   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.382532   94369 out.go:177] * Verifying ingress addon...
	I1211 23:35:14.383570   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.383609   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.383910   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.383943   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.383950   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.385408   94369 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:35:14.385469   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.385504   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.385511   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.385520   94369 addons.go:475] Verifying addon registry=true in "addons-021354"
	I1211 23:35:14.385905   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.385943   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.385950   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386161   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.386193   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386199   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386238   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.386269   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386276   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386283   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.386290   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.386508   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386517   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386527   94369 addons.go:475] Verifying addon metrics-server=true in "addons-021354"
	I1211 23:35:14.386531   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.386567   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386575   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386877   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386888   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386897   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.386903   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.387150   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.387163   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.387266   94369 out.go:177] * Verifying registry addon...
	I1211 23:35:14.389216   94369 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-021354 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:35:14.390012   94369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:35:14.436961   94369 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:35:14.436991   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:14.437304   94369 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:35:14.437320   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:14.470180   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.470209   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.470518   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.470539   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	W1211 23:35:14.470627   94369 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1211 23:35:14.483911   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.483932   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.484267   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.484290   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.484319   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.677595   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:35:14.896699   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:14.896949   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:15.280786   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.639011481s)
	I1211 23:35:15.280839   94369 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.528045224s)
	I1211 23:35:15.280879   94369 api_server.go:72] duration metric: took 10.289939885s to wait for apiserver process to appear ...
	I1211 23:35:15.280891   94369 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:35:15.280899   94369 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.613563089s)
	I1211 23:35:15.280909   94369 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I1211 23:35:15.280840   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:15.281131   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:15.281413   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:15.281430   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:15.281441   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:15.281448   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:15.282613   94369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:35:15.283181   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:15.283197   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:15.283212   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:15.283238   94369 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-021354"
	I1211 23:35:15.284745   94369 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:35:15.284771   94369 out.go:177] * Verifying csi-hostpath-driver addon...
	I1211 23:35:15.286013   94369 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:35:15.286039   94369 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:35:15.286727   94369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:35:15.325778   94369 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I1211 23:35:15.340309   94369 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:35:15.340341   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:15.354720   94369 api_server.go:141] control plane version: v1.31.2
	I1211 23:35:15.354761   94369 api_server.go:131] duration metric: took 73.863036ms to wait for apiserver health ...
	I1211 23:35:15.354774   94369 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:35:15.394871   94369 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:35:15.394898   94369 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:35:15.398092   94369 system_pods.go:59] 19 kube-system pods found
	I1211 23:35:15.398141   94369 system_pods.go:61] "amd-gpu-device-plugin-bh5l6" [dcd97a68-2e6d-4f42-8c52-855402d21e6c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:35:15.398158   94369 system_pods.go:61] "coredns-7c65d6cfc9-ctjgq" [28d6a423-c466-4a36-add7-9401b3318dad] Running
	I1211 23:35:15.398166   94369 system_pods.go:61] "coredns-7c65d6cfc9-zqjkl" [0dede579-c7ea-4553-b6b2-23f2a38c1cee] Running
	I1211 23:35:15.398172   94369 system_pods.go:61] "csi-hostpath-attacher-0" [c83f1e10-78d8-4652-9020-50342da3a576] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:35:15.398185   94369 system_pods.go:61] "csi-hostpath-resizer-0" [563bb0d7-c97d-410a-ac13-e968cbe6809f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:35:15.398195   94369 system_pods.go:61] "csi-hostpathplugin-bp9w7" [3b465037-83b0-4363-a2e2-16ebd3d3ac4f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:35:15.398203   94369 system_pods.go:61] "etcd-addons-021354" [23ea386f-3e06-41b9-b355-6feed882a434] Running
	I1211 23:35:15.398212   94369 system_pods.go:61] "kube-apiserver-addons-021354" [d0fd5365-ac43-4603-aee1-2ec157d58452] Running
	I1211 23:35:15.398218   94369 system_pods.go:61] "kube-controller-manager-addons-021354" [5c9f0c46-e7ee-490a-984d-fd2e80d8831b] Running
	I1211 23:35:15.398229   94369 system_pods.go:61] "kube-ingress-dns-minikube" [27c99b66-f43b-4ba8-b1e3-e20458576994] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:35:15.398240   94369 system_pods.go:61] "kube-proxy-nkpsm" [168a41ed-f854-4453-9157-1d3e444d4185] Running
	I1211 23:35:15.398246   94369 system_pods.go:61] "kube-scheduler-addons-021354" [b3b35e0d-4e6d-46b1-b771-d31c478524a7] Running
	I1211 23:35:15.398258   94369 system_pods.go:61] "metrics-server-84c5f94fbc-v42nk" [277fa5bf-2781-493c-86a5-d170dc8b9237] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:35:15.398272   94369 system_pods.go:61] "nvidia-device-plugin-daemonset-9qfkl" [fb3a5825-e9dc-42d8-ba09-f0d94c314d72] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:35:15.398284   94369 system_pods.go:61] "registry-5cc95cd69-9rj9b" [0eebcfc6-7414-4613-bf0e-42a424a43722] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:35:15.398296   94369 system_pods.go:61] "registry-proxy-x2lv7" [8128c544-09f7-4769-85c1-30a0a916ca57] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:35:15.398307   94369 system_pods.go:61] "snapshot-controller-56fcc65765-gfjfb" [c3966cdf-e310-4ffa-9d98-70eccaabb23b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:35:15.398391   94369 system_pods.go:61] "snapshot-controller-56fcc65765-w2qfk" [9a5f87de-b239-4076-baa2-e6e98f3e018b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:35:15.398415   94369 system_pods.go:61] "storage-provisioner" [86997c22-05b1-4987-b8ee-d1d7a36a0ddf] Running
	I1211 23:35:15.398427   94369 system_pods.go:74] duration metric: took 43.641817ms to wait for pod list to return data ...
	I1211 23:35:15.398436   94369 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:35:15.400400   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:15.414626   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:15.423705   94369 default_sa.go:45] found service account: "default"
	I1211 23:35:15.423733   94369 default_sa.go:55] duration metric: took 25.286742ms for default service account to be created ...
	I1211 23:35:15.423745   94369 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:35:15.436831   94369 system_pods.go:86] 19 kube-system pods found
	I1211 23:35:15.436862   94369 system_pods.go:89] "amd-gpu-device-plugin-bh5l6" [dcd97a68-2e6d-4f42-8c52-855402d21e6c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:35:15.436868   94369 system_pods.go:89] "coredns-7c65d6cfc9-ctjgq" [28d6a423-c466-4a36-add7-9401b3318dad] Running
	I1211 23:35:15.436876   94369 system_pods.go:89] "coredns-7c65d6cfc9-zqjkl" [0dede579-c7ea-4553-b6b2-23f2a38c1cee] Running
	I1211 23:35:15.436882   94369 system_pods.go:89] "csi-hostpath-attacher-0" [c83f1e10-78d8-4652-9020-50342da3a576] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:35:15.436887   94369 system_pods.go:89] "csi-hostpath-resizer-0" [563bb0d7-c97d-410a-ac13-e968cbe6809f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:35:15.436895   94369 system_pods.go:89] "csi-hostpathplugin-bp9w7" [3b465037-83b0-4363-a2e2-16ebd3d3ac4f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:35:15.436903   94369 system_pods.go:89] "etcd-addons-021354" [23ea386f-3e06-41b9-b355-6feed882a434] Running
	I1211 23:35:15.436908   94369 system_pods.go:89] "kube-apiserver-addons-021354" [d0fd5365-ac43-4603-aee1-2ec157d58452] Running
	I1211 23:35:15.436911   94369 system_pods.go:89] "kube-controller-manager-addons-021354" [5c9f0c46-e7ee-490a-984d-fd2e80d8831b] Running
	I1211 23:35:15.436922   94369 system_pods.go:89] "kube-ingress-dns-minikube" [27c99b66-f43b-4ba8-b1e3-e20458576994] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:35:15.436928   94369 system_pods.go:89] "kube-proxy-nkpsm" [168a41ed-f854-4453-9157-1d3e444d4185] Running
	I1211 23:35:15.436933   94369 system_pods.go:89] "kube-scheduler-addons-021354" [b3b35e0d-4e6d-46b1-b771-d31c478524a7] Running
	I1211 23:35:15.436940   94369 system_pods.go:89] "metrics-server-84c5f94fbc-v42nk" [277fa5bf-2781-493c-86a5-d170dc8b9237] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:35:15.436946   94369 system_pods.go:89] "nvidia-device-plugin-daemonset-9qfkl" [fb3a5825-e9dc-42d8-ba09-f0d94c314d72] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:35:15.436955   94369 system_pods.go:89] "registry-5cc95cd69-9rj9b" [0eebcfc6-7414-4613-bf0e-42a424a43722] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:35:15.436963   94369 system_pods.go:89] "registry-proxy-x2lv7" [8128c544-09f7-4769-85c1-30a0a916ca57] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:35:15.436971   94369 system_pods.go:89] "snapshot-controller-56fcc65765-gfjfb" [c3966cdf-e310-4ffa-9d98-70eccaabb23b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:35:15.436979   94369 system_pods.go:89] "snapshot-controller-56fcc65765-w2qfk" [9a5f87de-b239-4076-baa2-e6e98f3e018b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:35:15.436983   94369 system_pods.go:89] "storage-provisioner" [86997c22-05b1-4987-b8ee-d1d7a36a0ddf] Running
	I1211 23:35:15.436994   94369 system_pods.go:126] duration metric: took 13.242421ms to wait for k8s-apps to be running ...
	I1211 23:35:15.437004   94369 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:35:15.437051   94369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:35:15.465899   94369 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:35:15.465919   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:35:15.540196   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:35:15.797763   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:15.892206   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:15.898772   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:16.291239   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:16.390224   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:16.393102   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:16.542040   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.864389048s)
	I1211 23:35:16.542112   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:16.542115   94369 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.105036482s)
	I1211 23:35:16.542150   94369 system_svc.go:56] duration metric: took 1.105140012s WaitForService to wait for kubelet
	I1211 23:35:16.542130   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:16.542168   94369 kubeadm.go:582] duration metric: took 11.551227162s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:35:16.542198   94369 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:35:16.542552   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:16.542618   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:16.542637   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:16.542653   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:16.542666   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:16.542974   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:16.543002   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:16.543016   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:16.546203   94369 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1211 23:35:16.546224   94369 node_conditions.go:123] node cpu capacity is 2
	I1211 23:35:16.546253   94369 node_conditions.go:105] duration metric: took 4.046611ms to run NodePressure ...
	I1211 23:35:16.546265   94369 start.go:241] waiting for startup goroutines ...
	I1211 23:35:16.791426   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:16.897876   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:16.898463   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:17.130012   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.589766234s)
	I1211 23:35:17.130077   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:17.130094   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:17.130433   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:17.130461   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:17.130472   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:17.130481   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:17.130479   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:17.130809   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:17.130885   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:17.130900   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:17.131955   94369 addons.go:475] Verifying addon gcp-auth=true in "addons-021354"
	I1211 23:35:17.134288   94369 out.go:177] * Verifying gcp-auth addon...
	I1211 23:35:17.136645   94369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:35:17.148167   94369 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:35:17.148195   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:17.296717   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:17.389771   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:17.393486   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:17.641057   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:17.799473   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:17.890621   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:17.894938   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:18.140149   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:18.291774   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:18.402707   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:18.406829   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:18.641332   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:18.791851   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:18.889792   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:18.892658   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:19.140502   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:19.291423   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:19.390641   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:19.395503   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:19.646217   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:19.790905   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:19.889910   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:19.893062   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:20.281966   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:20.292115   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:20.390165   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:20.393213   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:20.640626   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:20.792140   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:20.890031   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:20.893017   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:21.141337   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:21.291902   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:21.389888   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:21.393033   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:21.640753   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:21.792280   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:21.890611   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:21.893498   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:22.140680   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:22.291960   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:22.389934   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:22.393077   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:22.641259   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:22.792239   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:22.890165   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:22.893313   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:23.140301   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:23.292138   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:23.389645   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:23.393944   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:23.640139   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:23.792102   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:23.891360   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:23.893272   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:24.143108   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:24.291362   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:24.389619   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:24.393994   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:24.641509   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:24.792669   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:24.890110   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:24.892708   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:25.140984   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:25.292249   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:25.391372   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:25.392961   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:25.640410   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:25.791839   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:25.890522   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:25.893617   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:26.140986   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:26.291584   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:26.389889   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:26.393406   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:26.640892   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:26.792459   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:26.889713   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:26.894043   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:27.140183   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:27.291734   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:27.390017   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:27.393909   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:27.640660   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:27.792353   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:27.890220   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:27.892974   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:28.140655   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:28.292414   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:28.390486   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:28.393508   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:28.641272   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:28.792082   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:28.890496   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:28.893257   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:29.140389   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:29.290876   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:29.390266   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:29.393367   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:29.640680   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:29.792250   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:29.890746   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:29.893991   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:30.141845   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:30.292648   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:30.389885   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:30.393465   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:30.645889   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:30.794674   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:30.889690   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:30.894049   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:31.141562   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:31.292575   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:31.390054   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:31.393268   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:31.640287   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:31.791001   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:31.890239   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:31.892814   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:32.140546   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:32.291940   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:32.389582   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:32.394002   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:32.644728   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:32.791649   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:32.889764   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:32.893766   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:33.140158   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:33.291283   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:33.389782   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:33.393533   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:33.640691   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:33.792178   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:33.890169   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:33.892984   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:34.140333   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:34.292959   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:34.389974   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:34.392883   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:34.639902   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:34.980731   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:34.981504   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:34.981775   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:35.140332   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:35.291378   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:35.389124   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:35.393141   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:35.640637   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:35.791997   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:35.890043   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:35.892981   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:36.139812   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:36.293332   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:36.390374   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:36.393334   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:36.640542   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:36.792446   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:36.890206   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:36.892768   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:37.141484   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:37.291502   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:37.389634   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:37.394212   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:37.640132   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:37.791652   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:37.890223   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:37.892971   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:38.140671   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:38.292406   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:38.391019   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:38.392893   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:38.641253   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:38.824725   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:39.144257   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:39.147509   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:39.147563   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:39.291979   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:39.393336   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:39.394049   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:39.639830   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:39.792106   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:39.890010   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:39.893040   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:40.139797   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:40.292281   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:40.389329   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:40.393669   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:40.640812   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:40.792017   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:40.890369   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:40.893008   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:41.139823   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:41.293979   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:41.390046   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:41.392885   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:41.640274   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:41.791269   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:41.892599   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:41.894014   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:42.140620   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:42.292401   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:42.389246   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:42.393889   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:42.640015   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:42.791153   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:42.891219   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:42.894388   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:43.139957   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:43.291682   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:43.391024   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:43.491538   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:43.640783   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:43.792280   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:43.890416   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:43.893203   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:44.140786   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:44.293014   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:44.390444   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:44.393079   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:44.640616   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:44.792229   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:44.891510   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:44.892929   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:45.140829   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:45.294311   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:45.391389   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:45.394004   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:45.640691   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:45.792680   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:45.890007   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:45.893307   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:46.140882   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:46.292545   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:46.389749   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:46.393548   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:46.641214   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:46.791906   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:46.890366   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:46.893300   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:47.140895   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:47.293414   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:47.389415   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:47.393463   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:47.640550   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:47.791502   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:47.889795   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:47.892858   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:48.140837   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:48.292371   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:48.390389   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:48.393147   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:48.640680   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:48.793014   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:48.890982   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:48.893879   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:49.140164   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:49.292071   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:49.390606   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:49.394106   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:49.640598   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:49.792352   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:49.890996   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:49.893723   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:50.141566   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:50.292331   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:50.390337   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:50.393558   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:50.641275   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:50.791119   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:50.891423   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:50.895690   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:51.141495   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:51.291656   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:51.389803   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:51.393066   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:51.640865   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:51.791686   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:51.889695   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:51.893839   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:52.142023   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:52.291873   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:52.389890   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:52.393978   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:52.640952   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:52.793098   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:52.896085   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:52.897096   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:53.141117   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:53.291506   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:53.389879   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:53.392871   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:53.640254   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:53.791240   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:53.891453   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:53.893410   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:54.140189   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:54.293468   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:54.389575   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:54.393676   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:54.641098   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:54.792384   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:54.889483   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:54.894615   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:55.140700   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:55.291677   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:55.389695   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:55.394091   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:55.640249   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:55.791268   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:55.890019   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:55.893993   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:56.140143   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:56.291899   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:56.389372   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:56.393487   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:56.641066   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:56.791906   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:56.891472   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:56.893111   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:57.140454   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:57.292200   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:57.390297   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:57.393198   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:57.639925   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:57.791696   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:57.889786   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:57.892874   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:58.140937   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:58.311619   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:58.389776   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:58.394079   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:58.640349   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:58.791822   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:58.890434   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:58.893766   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:59.140688   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:59.292520   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:59.389285   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:59.393354   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:59.640560   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:59.792042   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:59.890630   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:59.893683   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:00.141004   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:00.291829   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:00.390030   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:00.392997   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:00.640974   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:00.791317   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:00.890096   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:00.893456   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:01.141025   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:01.291167   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:01.390591   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:01.393286   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:01.640304   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:01.791994   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:01.890526   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:01.893252   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:02.141456   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:02.292100   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:02.390578   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:02.395016   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:02.639965   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:02.791051   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:02.889897   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:02.893689   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:03.140362   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:03.291141   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:03.390576   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:03.393256   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:03.640236   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:03.791889   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:03.891690   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:03.893328   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:04.140326   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:04.291543   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:04.389938   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:04.393099   94369 kapi.go:107] duration metric: took 50.00308353s to wait for kubernetes.io/minikube-addons=registry ...
	I1211 23:36:04.640167   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:04.791893   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:04.890555   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:05.141320   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:05.293297   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:05.390771   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:05.640877   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:05.791933   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:05.890151   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:06.140707   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:06.292454   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:06.389656   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:06.640800   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:06.793052   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:06.889489   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:07.140567   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:07.291458   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:07.390893   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:07.640889   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:07.792286   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:07.889898   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:08.140859   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:08.292165   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:08.390078   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:08.639874   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:08.792813   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:08.889834   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:09.141054   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:09.291158   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:09.391512   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:09.640319   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:09.791795   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:09.889988   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:10.141518   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:10.291973   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:10.390321   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:10.640315   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:10.791292   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:10.889099   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:11.141039   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:11.293720   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:11.392592   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:11.641377   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:11.791548   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:11.890652   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:12.140554   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:12.292032   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:12.390670   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:12.641085   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:12.792129   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:12.891790   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:13.140278   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:13.291685   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:13.392295   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:13.640504   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:13.791440   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:13.890473   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:14.140348   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:14.291183   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:14.390245   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:14.640205   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:14.791452   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:14.890360   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:15.141910   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:15.293573   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:15.392504   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:15.641811   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:15.792427   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:15.891074   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:16.140688   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:16.291848   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:16.390284   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:16.640713   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:16.797129   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:16.891607   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:17.140402   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:17.291501   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:17.399328   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:17.641574   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:17.792052   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:17.889600   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:18.140136   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:18.292116   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:18.390082   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:18.641473   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:18.791963   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:18.890214   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:19.141001   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:19.291247   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:19.389980   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:19.641695   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:19.792254   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:19.891513   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:20.141069   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:20.291857   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:20.390239   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:20.640543   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:20.792762   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:20.889633   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:21.140084   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:21.291660   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:21.389864   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:21.641426   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:21.792077   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:22.059580   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:22.256481   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:22.291860   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:22.389722   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:22.640176   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:22.791514   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:22.892757   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:23.140443   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:23.291863   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:23.391239   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:23.640072   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:23.795723   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:23.889084   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:24.140793   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:24.293856   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:24.395752   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:24.640726   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:24.792721   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:24.890590   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:25.140765   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:25.292263   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:25.389459   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:25.640492   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:25.791857   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:25.891095   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:26.140920   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:26.292387   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:26.389681   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:26.640509   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:26.792039   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:26.891910   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:27.140967   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:27.292600   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:27.390244   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:27.642276   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:27.792424   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:27.889292   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:28.140547   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:28.291406   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:28.389850   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:28.640364   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:28.830381   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:28.890616   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:29.140646   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:29.291617   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:29.393924   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:29.647979   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:29.792059   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:29.893795   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:30.140872   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:30.291904   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:30.390932   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:30.640415   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:30.794354   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:30.890061   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:31.140631   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:31.292125   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:31.391893   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:31.640735   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:31.791658   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:31.895704   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:32.146008   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:32.299451   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:32.392214   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:32.640856   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:32.792492   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:32.891324   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:33.140646   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:33.292170   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:33.390714   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:33.643873   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:33.792106   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:33.890534   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:34.140708   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:34.292090   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:34.390292   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:34.886360   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:34.887041   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:34.894285   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:35.140541   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:35.292208   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:35.391113   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:35.643970   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:35.791555   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:35.897238   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:36.141594   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:36.292467   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:36.392596   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:36.640789   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:36.792230   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:36.889546   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:37.141081   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:37.291200   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:37.393733   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:37.640413   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:37.791802   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:37.890379   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:38.140928   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:38.291827   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:38.391197   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:38.640017   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:38.791519   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:38.889959   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:39.140962   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:39.290890   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:39.390236   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:39.641894   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:39.792451   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:39.889931   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:40.141242   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:40.291804   94369 kapi.go:107] duration metric: took 1m25.005072108s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1211 23:36:40.390182   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:40.641540   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:40.890750   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:41.140735   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:41.390060   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:41.640682   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:41.891713   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:42.140064   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:42.390520   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:42.640356   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:42.890731   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:43.140509   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:43.391018   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:43.641346   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:43.890889   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:44.141020   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:44.390652   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:44.640498   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:44.890882   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:45.143241   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:45.391186   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:45.641859   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:45.890204   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:46.139961   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:46.390625   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:46.640285   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:46.892020   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:47.141145   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:47.390829   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:47.641204   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:47.890875   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:48.140306   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:48.390059   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:48.641858   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:48.890289   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:49.140037   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:49.390527   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:49.641264   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:49.890725   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:50.140328   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:50.391426   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:50.640267   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:50.890334   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:51.139941   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:51.390939   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:51.640798   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:51.889872   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:52.140765   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:52.390817   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:52.640673   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:52.889837   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:53.141153   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:53.390687   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:53.641011   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:53.893854   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:54.141148   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:54.390679   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:54.640544   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:54.890966   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:55.140719   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:55.390276   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:55.639742   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:55.889873   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:56.141924   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:56.392036   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:56.640734   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:56.889752   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:57.140546   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:57.391136   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:57.640663   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:57.889930   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:58.140630   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:58.389747   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:58.640560   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:58.892325   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:59.141089   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:59.390842   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:59.642171   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:59.890843   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:00.140695   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:00.390037   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:00.641627   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:00.889509   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:01.140843   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:01.390298   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:01.640469   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:01.891017   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:02.140494   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:02.389730   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:02.640706   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:02.890645   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:03.141509   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:03.390746   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:03.641122   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:03.890652   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:04.141131   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:04.390131   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:04.640366   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:04.890145   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:05.140570   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:05.390767   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:05.641066   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:05.890458   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:06.140013   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:06.392343   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:06.641382   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:06.890566   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:07.140348   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:07.390711   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:07.640557   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:07.890898   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:08.141236   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:08.390578   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:08.640377   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:08.890844   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:09.140851   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:09.389789   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:09.641553   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:09.889600   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:10.140529   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:10.390668   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:10.640449   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:11.202316   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:11.202874   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:11.390707   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:11.641387   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:11.891574   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:12.141446   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:12.390990   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:12.640012   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:12.890133   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:13.141101   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:13.390323   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:13.640102   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:13.890719   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:14.140572   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:14.391117   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:14.640622   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:14.889956   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:15.140966   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:15.390477   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:15.640618   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:15.890888   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:16.140552   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:16.390485   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:16.640117   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:16.890540   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:17.140560   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:17.390736   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:17.640287   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:17.891143   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:18.140392   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:18.390499   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:18.641268   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:18.891086   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:19.141144   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:19.390376   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:19.640430   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:19.890926   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:20.141104   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:20.390280   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:20.640037   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:20.890396   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:21.140520   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:21.391058   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:21.641678   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:21.891475   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:22.140656   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:22.393665   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:22.640687   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:22.889710   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:23.139720   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:23.389730   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:23.640385   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:23.890890   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:24.140717   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:24.389880   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:24.641213   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:24.891481   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:25.140464   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:25.391527   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:25.641375   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:25.890954   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:26.140777   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:26.390656   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:26.640860   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:26.890447   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:27.140492   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:27.390573   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:27.640652   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:27.890005   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:28.141630   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:28.389859   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:28.641472   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:28.891387   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:29.140306   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:29.390860   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:29.640897   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:29.891968   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:30.141790   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:30.390547   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:30.640453   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:30.890831   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:31.140324   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:31.390591   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:31.641264   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:31.893895   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:32.141900   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:32.389830   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:32.640020   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:32.891240   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:33.140666   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:33.389775   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:33.640229   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:34.202289   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:34.202458   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:34.391027   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:34.640298   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:34.890655   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:35.140584   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:35.391795   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:35.642295   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:35.891064   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:36.140526   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:36.794507   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:36.794932   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:36.892633   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:37.141420   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:37.391327   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:37.640396   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:37.890912   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:38.141199   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:38.390699   94369 kapi.go:107] duration metric: took 2m24.0052881s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1211 23:37:38.640703   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:39.140754   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:39.640012   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:40.140315   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:40.642150   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:41.142085   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:41.640257   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:42.141418   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:42.642242   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:43.140821   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:43.640059   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:44.140524   94369 kapi.go:107] duration metric: took 2m27.003897373s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1211 23:37:44.142133   94369 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-021354 cluster.
	I1211 23:37:44.143501   94369 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1211 23:37:44.144831   94369 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1211 23:37:44.146271   94369 out.go:177] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1211 23:37:44.147493   94369 addons.go:510] duration metric: took 2m39.156498942s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin nvidia-device-plugin inspektor-gadget cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1211 23:37:44.147533   94369 start.go:246] waiting for cluster config update ...
	I1211 23:37:44.147556   94369 start.go:255] writing updated cluster config ...
	I1211 23:37:44.147878   94369 ssh_runner.go:195] Run: rm -f paused
	I1211 23:37:44.205326   94369 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1211 23:37:44.207060   94369 out.go:177] * Done! kubectl is now configured to use "addons-021354" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.097101545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960466097076561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0896f0d3-18d3-41d0-bcf7-91f27754c631 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.097868400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b572694-c3cb-408b-878d-158311fe22d8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.097928473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b572694-c3cb-408b-878d-158311fe22d8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.098291353Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c8a7e2acd7265aa37a7a716bdb48d59846f009fec18d18f63460d6a412ed6b9,PodSandboxId:f44fc51622f81050ae72c9b3ff1845ce5923cb2dd9cc1d4104b8c206bb117770,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733960325434033919,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264cded5-669e-4c91-a0aa-800234ac799a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d920673a1e830a704fcba21d58777c6eefac966c461616530df373e5177fe8b2,PodSandboxId:fc0769151650597b46fdcb3ac5d0efb89c9783b172906e5384e03e4886c9338a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733960272596070543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f37f102-2cd3-45d7-a36e-58954eec3bcb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a59b877c4bbe839a71f924c292fe5daa61a79c225c8816421a436edd480049,PodSandboxId:b8ff152b9e56208ae1ce62148610bac9717ba2c3ef9a38a957ae91c707a9434f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733960256941630574,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-sppm8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64946dcb-a436-40bf-9874-c98268f54e0b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:32e0dd03039398181c6c08e8afc1947acd79cf438b44344a76e551ba93149ee5,PodSandboxId:eaf05ad8497a2e8edc78ca9c62e3a2fba102394274dce4ef9b6a8fe49d751359,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733960187585361399,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xpv5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c328bad3-f5ee-47e6-a1b4-b017d697bfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111bd32323a96535bb916f59e0edf3665c02efdf89e870723ebaee933a5bdc1c,PodSandboxId:60a8f943a7f822eb179b0b2a576040585f493bae73b120ef6c30360049ab3662,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733960186812131346,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s7mc9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fa7a14e0-5ab1-485a-b9c3-9ea401dfc97e,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e8144c96f2b5267068573bf9ce48f07753d2c8abb7fdd4f929c88edcb85f85,PodSandboxId:6541effb8bb1c1dfeb04a4a4aad1c896e2c212dffa42d731a7c1aaed9d8b32da,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733960176793312543,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v42nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277fa5bf-2781-493c-86a5-d170dc8b9237,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f42e28b71fa5a39e90872a83d1bb9f5d045a3bdc10105b267eb50a636e85e,PodSandboxId:11067e9e56f14fe37fe06b06749aefa25ea33edc1462bae1bb5fc3270ae64a36,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provision
er@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733960167455984785,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4rzfr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 19598ba3-56e0-4552-a658-084d184b5ed0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6860aab3fff86e85faeaed6ece3581b0e019402cdb41b3fb2ee44515455ee163,PodSandboxId:88896cc53547e1c6ed0a43c4a303e2e4617b9ea475b43abbd4e2fc81a923cf98,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attem
pt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733960143281494962,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bh5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd97a68-2e6d-4f42-8c52-855402d21e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972c2b446af97ea8b96ef7b81458c15d3b68dab33c0dd2b84ad6b1b9d494d7b4,PodSandboxId:c5969a285cd896802ac616e0c013221db5261d50cc86c90e3ae007b182925cd3,Metadata:&ContainerM
etadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733960122897994813,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c99b66-f43b-4ba8-b1e3-e20458576994,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b1897a55f24fb82118a636c2
6748e4b51ea902683b0e9fe5289033361bf6e1,PodSandboxId:691522df16e0207931430bb26401590c8d0e8b8b654433c3a50cf0393396ef42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733960111332560576,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86997c22-05b1-4987-b8ee-d1d7a36a0ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7924fdda27f8c583d2adee7f082d8eb20ec95a
14a88aa651b8ff3bf14a270bd3,PodSandboxId:636e68fcc4a0ede70ee6f96d82772b0e6d9b13771f907901f9f42d8491db67fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733960110191072222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ctjgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6a423-c466-4a36-add7-9401b3318dad,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1,PodSandboxId:2ac84efc2c5fcebccb17d5870528646779d8fafa88eeb44e1373595220f911ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733960107018500473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkpsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168a41ed-f854-4453-9157-1d3e444d4185,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde,PodSandboxId:24140bc84f0a5298706a7a93ef9a7d060dee59b542d4292d32320f6e899448d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733960094317872695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff225982bdaab034ea125e47b66b68c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca,PodSandboxId:e876b66ef7de37c4031f14018d94ff2b47018d746a76d9fa99037bbff56e9c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733960094364732659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5e6213caadcf4e71b2874b2c8f3150,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672,PodSandboxId:be44b1e012d09dba68dc335ed9c3ae7445ee0f21ee2889f04b1106e0a6d3199b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733960094287607930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf626506ebe98d943792651346e8c82e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a,PodSandboxId:b14a081a3d9d875fe069c4e71f326a35a4fbbf4a454441bb859df6997efca650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733960094225589645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a510befc909f02ab6a66cc801a1e10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuberne
tes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b572694-c3cb-408b-878d-158311fe22d8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.137123698Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b23a30f-6f0a-43aa-944b-7bd95c5aad96 name=/runtime.v1.RuntimeService/Version
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.137379045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b23a30f-6f0a-43aa-944b-7bd95c5aad96 name=/runtime.v1.RuntimeService/Version
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.138187852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87f1ba3f-df11-4f28-af6f-395289538721 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.140100561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960466140070360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87f1ba3f-df11-4f28-af6f-395289538721 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.140638195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98d40fa3-45d6-4f66-8a01-026818d70962 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.140691511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98d40fa3-45d6-4f66-8a01-026818d70962 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.141038578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c8a7e2acd7265aa37a7a716bdb48d59846f009fec18d18f63460d6a412ed6b9,PodSandboxId:f44fc51622f81050ae72c9b3ff1845ce5923cb2dd9cc1d4104b8c206bb117770,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733960325434033919,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264cded5-669e-4c91-a0aa-800234ac799a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d920673a1e830a704fcba21d58777c6eefac966c461616530df373e5177fe8b2,PodSandboxId:fc0769151650597b46fdcb3ac5d0efb89c9783b172906e5384e03e4886c9338a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733960272596070543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f37f102-2cd3-45d7-a36e-58954eec3bcb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a59b877c4bbe839a71f924c292fe5daa61a79c225c8816421a436edd480049,PodSandboxId:b8ff152b9e56208ae1ce62148610bac9717ba2c3ef9a38a957ae91c707a9434f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733960256941630574,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-sppm8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64946dcb-a436-40bf-9874-c98268f54e0b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:32e0dd03039398181c6c08e8afc1947acd79cf438b44344a76e551ba93149ee5,PodSandboxId:eaf05ad8497a2e8edc78ca9c62e3a2fba102394274dce4ef9b6a8fe49d751359,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733960187585361399,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xpv5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c328bad3-f5ee-47e6-a1b4-b017d697bfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111bd32323a96535bb916f59e0edf3665c02efdf89e870723ebaee933a5bdc1c,PodSandboxId:60a8f943a7f822eb179b0b2a576040585f493bae73b120ef6c30360049ab3662,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733960186812131346,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s7mc9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fa7a14e0-5ab1-485a-b9c3-9ea401dfc97e,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e8144c96f2b5267068573bf9ce48f07753d2c8abb7fdd4f929c88edcb85f85,PodSandboxId:6541effb8bb1c1dfeb04a4a4aad1c896e2c212dffa42d731a7c1aaed9d8b32da,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733960176793312543,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v42nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277fa5bf-2781-493c-86a5-d170dc8b9237,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f42e28b71fa5a39e90872a83d1bb9f5d045a3bdc10105b267eb50a636e85e,PodSandboxId:11067e9e56f14fe37fe06b06749aefa25ea33edc1462bae1bb5fc3270ae64a36,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provision
er@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733960167455984785,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4rzfr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 19598ba3-56e0-4552-a658-084d184b5ed0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6860aab3fff86e85faeaed6ece3581b0e019402cdb41b3fb2ee44515455ee163,PodSandboxId:88896cc53547e1c6ed0a43c4a303e2e4617b9ea475b43abbd4e2fc81a923cf98,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attem
pt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733960143281494962,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bh5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd97a68-2e6d-4f42-8c52-855402d21e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972c2b446af97ea8b96ef7b81458c15d3b68dab33c0dd2b84ad6b1b9d494d7b4,PodSandboxId:c5969a285cd896802ac616e0c013221db5261d50cc86c90e3ae007b182925cd3,Metadata:&ContainerM
etadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733960122897994813,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c99b66-f43b-4ba8-b1e3-e20458576994,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b1897a55f24fb82118a636c2
6748e4b51ea902683b0e9fe5289033361bf6e1,PodSandboxId:691522df16e0207931430bb26401590c8d0e8b8b654433c3a50cf0393396ef42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733960111332560576,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86997c22-05b1-4987-b8ee-d1d7a36a0ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7924fdda27f8c583d2adee7f082d8eb20ec95a
14a88aa651b8ff3bf14a270bd3,PodSandboxId:636e68fcc4a0ede70ee6f96d82772b0e6d9b13771f907901f9f42d8491db67fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733960110191072222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ctjgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6a423-c466-4a36-add7-9401b3318dad,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1,PodSandboxId:2ac84efc2c5fcebccb17d5870528646779d8fafa88eeb44e1373595220f911ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733960107018500473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkpsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168a41ed-f854-4453-9157-1d3e444d4185,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde,PodSandboxId:24140bc84f0a5298706a7a93ef9a7d060dee59b542d4292d32320f6e899448d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733960094317872695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff225982bdaab034ea125e47b66b68c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca,PodSandboxId:e876b66ef7de37c4031f14018d94ff2b47018d746a76d9fa99037bbff56e9c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733960094364732659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5e6213caadcf4e71b2874b2c8f3150,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672,PodSandboxId:be44b1e012d09dba68dc335ed9c3ae7445ee0f21ee2889f04b1106e0a6d3199b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733960094287607930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf626506ebe98d943792651346e8c82e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a,PodSandboxId:b14a081a3d9d875fe069c4e71f326a35a4fbbf4a454441bb859df6997efca650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733960094225589645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a510befc909f02ab6a66cc801a1e10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuberne
tes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98d40fa3-45d6-4f66-8a01-026818d70962 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.176613735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f4880e1-fafd-4d25-9972-0af4244baf19 name=/runtime.v1.RuntimeService/Version
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.176687850Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f4880e1-fafd-4d25-9972-0af4244baf19 name=/runtime.v1.RuntimeService/Version
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.178695806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20eb88b7-9196-45c9-98f7-47c2f3d1db26 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.180122563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960466180097079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20eb88b7-9196-45c9-98f7-47c2f3d1db26 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.180845066Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=081b22ae-3e88-44b8-bd90-d97fa85cc101 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.180896768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=081b22ae-3e88-44b8-bd90-d97fa85cc101 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.181398258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c8a7e2acd7265aa37a7a716bdb48d59846f009fec18d18f63460d6a412ed6b9,PodSandboxId:f44fc51622f81050ae72c9b3ff1845ce5923cb2dd9cc1d4104b8c206bb117770,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733960325434033919,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264cded5-669e-4c91-a0aa-800234ac799a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d920673a1e830a704fcba21d58777c6eefac966c461616530df373e5177fe8b2,PodSandboxId:fc0769151650597b46fdcb3ac5d0efb89c9783b172906e5384e03e4886c9338a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733960272596070543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f37f102-2cd3-45d7-a36e-58954eec3bcb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a59b877c4bbe839a71f924c292fe5daa61a79c225c8816421a436edd480049,PodSandboxId:b8ff152b9e56208ae1ce62148610bac9717ba2c3ef9a38a957ae91c707a9434f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733960256941630574,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-sppm8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64946dcb-a436-40bf-9874-c98268f54e0b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:32e0dd03039398181c6c08e8afc1947acd79cf438b44344a76e551ba93149ee5,PodSandboxId:eaf05ad8497a2e8edc78ca9c62e3a2fba102394274dce4ef9b6a8fe49d751359,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733960187585361399,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xpv5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c328bad3-f5ee-47e6-a1b4-b017d697bfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111bd32323a96535bb916f59e0edf3665c02efdf89e870723ebaee933a5bdc1c,PodSandboxId:60a8f943a7f822eb179b0b2a576040585f493bae73b120ef6c30360049ab3662,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733960186812131346,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s7mc9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fa7a14e0-5ab1-485a-b9c3-9ea401dfc97e,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e8144c96f2b5267068573bf9ce48f07753d2c8abb7fdd4f929c88edcb85f85,PodSandboxId:6541effb8bb1c1dfeb04a4a4aad1c896e2c212dffa42d731a7c1aaed9d8b32da,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733960176793312543,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v42nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277fa5bf-2781-493c-86a5-d170dc8b9237,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f42e28b71fa5a39e90872a83d1bb9f5d045a3bdc10105b267eb50a636e85e,PodSandboxId:11067e9e56f14fe37fe06b06749aefa25ea33edc1462bae1bb5fc3270ae64a36,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provision
er@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733960167455984785,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4rzfr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 19598ba3-56e0-4552-a658-084d184b5ed0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6860aab3fff86e85faeaed6ece3581b0e019402cdb41b3fb2ee44515455ee163,PodSandboxId:88896cc53547e1c6ed0a43c4a303e2e4617b9ea475b43abbd4e2fc81a923cf98,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attem
pt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733960143281494962,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bh5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd97a68-2e6d-4f42-8c52-855402d21e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972c2b446af97ea8b96ef7b81458c15d3b68dab33c0dd2b84ad6b1b9d494d7b4,PodSandboxId:c5969a285cd896802ac616e0c013221db5261d50cc86c90e3ae007b182925cd3,Metadata:&ContainerM
etadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733960122897994813,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c99b66-f43b-4ba8-b1e3-e20458576994,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b1897a55f24fb82118a636c2
6748e4b51ea902683b0e9fe5289033361bf6e1,PodSandboxId:691522df16e0207931430bb26401590c8d0e8b8b654433c3a50cf0393396ef42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733960111332560576,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86997c22-05b1-4987-b8ee-d1d7a36a0ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7924fdda27f8c583d2adee7f082d8eb20ec95a
14a88aa651b8ff3bf14a270bd3,PodSandboxId:636e68fcc4a0ede70ee6f96d82772b0e6d9b13771f907901f9f42d8491db67fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733960110191072222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ctjgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6a423-c466-4a36-add7-9401b3318dad,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1,PodSandboxId:2ac84efc2c5fcebccb17d5870528646779d8fafa88eeb44e1373595220f911ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733960107018500473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkpsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168a41ed-f854-4453-9157-1d3e444d4185,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde,PodSandboxId:24140bc84f0a5298706a7a93ef9a7d060dee59b542d4292d32320f6e899448d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733960094317872695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff225982bdaab034ea125e47b66b68c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca,PodSandboxId:e876b66ef7de37c4031f14018d94ff2b47018d746a76d9fa99037bbff56e9c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733960094364732659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5e6213caadcf4e71b2874b2c8f3150,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672,PodSandboxId:be44b1e012d09dba68dc335ed9c3ae7445ee0f21ee2889f04b1106e0a6d3199b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733960094287607930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf626506ebe98d943792651346e8c82e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a,PodSandboxId:b14a081a3d9d875fe069c4e71f326a35a4fbbf4a454441bb859df6997efca650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733960094225589645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a510befc909f02ab6a66cc801a1e10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuberne
tes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=081b22ae-3e88-44b8-bd90-d97fa85cc101 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.220046250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82903abf-6473-4e89-88ac-1d3584bf464e name=/runtime.v1.RuntimeService/Version
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.220119263Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82903abf-6473-4e89-88ac-1d3584bf464e name=/runtime.v1.RuntimeService/Version
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.221447895Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7708c1f-5bd5-4c1d-acf7-a4bf157f06d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.222910519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960466222884891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7708c1f-5bd5-4c1d-acf7-a4bf157f06d7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.223533134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa456040-db4b-4221-b91a-a6489e70038e name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.223591591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa456040-db4b-4221-b91a-a6489e70038e name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:41:06 addons-021354 crio[659]: time="2024-12-11 23:41:06.224005301Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c8a7e2acd7265aa37a7a716bdb48d59846f009fec18d18f63460d6a412ed6b9,PodSandboxId:f44fc51622f81050ae72c9b3ff1845ce5923cb2dd9cc1d4104b8c206bb117770,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733960325434033919,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264cded5-669e-4c91-a0aa-800234ac799a,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d920673a1e830a704fcba21d58777c6eefac966c461616530df373e5177fe8b2,PodSandboxId:fc0769151650597b46fdcb3ac5d0efb89c9783b172906e5384e03e4886c9338a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733960272596070543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f37f102-2cd3-45d7-a36e-58954eec3bcb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2a59b877c4bbe839a71f924c292fe5daa61a79c225c8816421a436edd480049,PodSandboxId:b8ff152b9e56208ae1ce62148610bac9717ba2c3ef9a38a957ae91c707a9434f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733960256941630574,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-sppm8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 64946dcb-a436-40bf-9874-c98268f54e0b,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:32e0dd03039398181c6c08e8afc1947acd79cf438b44344a76e551ba93149ee5,PodSandboxId:eaf05ad8497a2e8edc78ca9c62e3a2fba102394274dce4ef9b6a8fe49d751359,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733960187585361399,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-7xpv5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c328bad3-f5ee-47e6-a1b4-b017d697bfd7,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:111bd32323a96535bb916f59e0edf3665c02efdf89e870723ebaee933a5bdc1c,PodSandboxId:60a8f943a7f822eb179b0b2a576040585f493bae73b120ef6c30360049ab3662,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733960186812131346,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s7mc9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fa7a14e0-5ab1-485a-b9c3-9ea401dfc97e,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e8144c96f2b5267068573bf9ce48f07753d2c8abb7fdd4f929c88edcb85f85,PodSandboxId:6541effb8bb1c1dfeb04a4a4aad1c896e2c212dffa42d731a7c1aaed9d8b32da,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733960176793312543,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v42nk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 277fa5bf-2781-493c-86a5-d170dc8b9237,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f42e28b71fa5a39e90872a83d1bb9f5d045a3bdc10105b267eb50a636e85e,PodSandboxId:11067e9e56f14fe37fe06b06749aefa25ea33edc1462bae1bb5fc3270ae64a36,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provision
er@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733960167455984785,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4rzfr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 19598ba3-56e0-4552-a658-084d184b5ed0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6860aab3fff86e85faeaed6ece3581b0e019402cdb41b3fb2ee44515455ee163,PodSandboxId:88896cc53547e1c6ed0a43c4a303e2e4617b9ea475b43abbd4e2fc81a923cf98,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attem
pt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733960143281494962,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bh5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd97a68-2e6d-4f42-8c52-855402d21e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972c2b446af97ea8b96ef7b81458c15d3b68dab33c0dd2b84ad6b1b9d494d7b4,PodSandboxId:c5969a285cd896802ac616e0c013221db5261d50cc86c90e3ae007b182925cd3,Metadata:&ContainerM
etadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733960122897994813,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27c99b66-f43b-4ba8-b1e3-e20458576994,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b1897a55f24fb82118a636c2
6748e4b51ea902683b0e9fe5289033361bf6e1,PodSandboxId:691522df16e0207931430bb26401590c8d0e8b8b654433c3a50cf0393396ef42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733960111332560576,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86997c22-05b1-4987-b8ee-d1d7a36a0ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7924fdda27f8c583d2adee7f082d8eb20ec95a
14a88aa651b8ff3bf14a270bd3,PodSandboxId:636e68fcc4a0ede70ee6f96d82772b0e6d9b13771f907901f9f42d8491db67fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733960110191072222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ctjgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6a423-c466-4a36-add7-9401b3318dad,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1,PodSandboxId:2ac84efc2c5fcebccb17d5870528646779d8fafa88eeb44e1373595220f911ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733960107018500473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkpsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168a41ed-f854-4453-9157-1d3e444d4185,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde,PodSandboxId:24140bc84f0a5298706a7a93ef9a7d060dee59b542d4292d32320f6e899448d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733960094317872695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff225982bdaab034ea125e47b66b68c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca,PodSandboxId:e876b66ef7de37c4031f14018d94ff2b47018d746a76d9fa99037bbff56e9c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733960094364732659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5e6213caadcf4e71b2874b2c8f3150,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672,PodSandboxId:be44b1e012d09dba68dc335ed9c3ae7445ee0f21ee2889f04b1106e0a6d3199b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733960094287607930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf626506ebe98d943792651346e8c82e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.te
rminationGracePeriod: 30,},},&Container{Id:0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a,PodSandboxId:b14a081a3d9d875fe069c4e71f326a35a4fbbf4a454441bb859df6997efca650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733960094225589645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a510befc909f02ab6a66cc801a1e10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuberne
tes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa456040-db4b-4221-b91a-a6489e70038e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2c8a7e2acd726       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   f44fc51622f81       nginx
	d920673a1e830       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   fc07691516505       busybox
	f2a59b877c4bb       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   b8ff152b9e562       ingress-nginx-controller-5f85ff4588-sppm8
	32e0dd0303939       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             4 minutes ago       Exited              patch                     1                   eaf05ad8497a2       ingress-nginx-admission-patch-7xpv5
	111bd32323a96       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   4 minutes ago       Exited              create                    0                   60a8f943a7f82       ingress-nginx-admission-create-s7mc9
	79e8144c96f2b       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   6541effb8bb1c       metrics-server-84c5f94fbc-v42nk
	a72f42e28b71f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   11067e9e56f14       local-path-provisioner-86d989889c-4rzfr
	6860aab3fff86       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   88896cc53547e       amd-gpu-device-plugin-bh5l6
	972c2b446af97       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             5 minutes ago       Running             minikube-ingress-dns      0                   c5969a285cd89       kube-ingress-dns-minikube
	40b1897a55f24       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   691522df16e02       storage-provisioner
	7924fdda27f8c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             5 minutes ago       Running             coredns                   0                   636e68fcc4a0e       coredns-7c65d6cfc9-ctjgq
	bd4a1fabc2629       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             5 minutes ago       Running             kube-proxy                0                   2ac84efc2c5fc       kube-proxy-nkpsm
	f757cdb5508ff       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             6 minutes ago       Running             kube-apiserver            0                   e876b66ef7de3       kube-apiserver-addons-021354
	579175421d814       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             6 minutes ago       Running             kube-scheduler            0                   24140bc84f0a5       kube-scheduler-addons-021354
	de7d5e893bc1b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             6 minutes ago       Running             etcd                      0                   be44b1e012d09       etcd-addons-021354
	0b96198079cab       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             6 minutes ago       Running             kube-controller-manager   0                   b14a081a3d9d8       kube-controller-manager-addons-021354
	
	
	==> coredns [7924fdda27f8c583d2adee7f082d8eb20ec95a14a88aa651b8ff3bf14a270bd3] <==
	[INFO] 10.244.0.7:33655 - 53766 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000111715s
	[INFO] 10.244.0.7:33655 - 33591 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000171997s
	[INFO] 10.244.0.7:33655 - 55585 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000079089s
	[INFO] 10.244.0.7:33655 - 42200 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000079013s
	[INFO] 10.244.0.7:33655 - 59708 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000473603s
	[INFO] 10.244.0.7:33655 - 65414 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000143718s
	[INFO] 10.244.0.7:33655 - 57852 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000145977s
	[INFO] 10.244.0.7:37465 - 35968 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102813s
	[INFO] 10.244.0.7:37465 - 35692 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000037548s
	[INFO] 10.244.0.7:54525 - 51715 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060646s
	[INFO] 10.244.0.7:54525 - 51498 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047236s
	[INFO] 10.244.0.7:45456 - 49491 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056748s
	[INFO] 10.244.0.7:45456 - 49259 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042536s
	[INFO] 10.244.0.7:55243 - 11992 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120519s
	[INFO] 10.244.0.7:55243 - 12156 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00028888s
	[INFO] 10.244.0.23:35387 - 13515 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000640533s
	[INFO] 10.244.0.23:47438 - 27167 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000146644s
	[INFO] 10.244.0.23:36102 - 1776 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145659s
	[INFO] 10.244.0.23:48085 - 34108 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138363s
	[INFO] 10.244.0.23:40347 - 14441 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000122964s
	[INFO] 10.244.0.23:39457 - 62021 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132381s
	[INFO] 10.244.0.23:52964 - 33855 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 154 0.003817103s
	[INFO] 10.244.0.23:39099 - 10275 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 190 0.006148878s
	[INFO] 10.244.0.27:53779 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000380615s
	[INFO] 10.244.0.27:45607 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000091467s
	
	
	==> describe nodes <==
	Name:               addons-021354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-021354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=addons-021354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T23_35_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-021354
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:34:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-021354
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Dec 2024 23:40:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Dec 2024 23:39:05 +0000   Wed, 11 Dec 2024 23:34:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Dec 2024 23:39:05 +0000   Wed, 11 Dec 2024 23:34:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Dec 2024 23:39:05 +0000   Wed, 11 Dec 2024 23:34:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Dec 2024 23:39:05 +0000   Wed, 11 Dec 2024 23:35:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    addons-021354
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2313c7905d7240539c21a58738545990
	  System UUID:                2313c790-5d72-4053-9c21-a58738545990
	  Boot ID:                    4b52f08d-7f6b-4c06-8e3f-51e5db38dc4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  default                     hello-world-app-55bf9c44b4-8b2cl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-sppm8    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m52s
	  kube-system                 amd-gpu-device-plugin-bh5l6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 coredns-7c65d6cfc9-ctjgq                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     6m2s
	  kube-system                 etcd-addons-021354                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m7s
	  kube-system                 kube-apiserver-addons-021354                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-021354        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-nkpsm                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-021354                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-84c5f94fbc-v42nk              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         5m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-4rzfr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m56s  kube-proxy       
	  Normal  Starting                 6m7s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m7s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m7s   kubelet          Node addons-021354 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s   kubelet          Node addons-021354 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s   kubelet          Node addons-021354 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m6s   kubelet          Node addons-021354 status is now: NodeReady
	  Normal  RegisteredNode           6m3s   node-controller  Node addons-021354 event: Registered Node addons-021354 in Controller
	
	
	==> dmesg <==
	[  +0.057127] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.477393] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.075728] kauditd_printk_skb: 69 callbacks suppressed
	[Dec11 23:35] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.151719] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.004173] kauditd_printk_skb: 91 callbacks suppressed
	[  +5.230299] kauditd_printk_skb: 161 callbacks suppressed
	[  +7.444894] kauditd_printk_skb: 74 callbacks suppressed
	[Dec11 23:36] kauditd_printk_skb: 4 callbacks suppressed
	[ +19.180892] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.701789] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.221573] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.762541] kauditd_printk_skb: 23 callbacks suppressed
	[Dec11 23:37] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.547293] kauditd_printk_skb: 9 callbacks suppressed
	[Dec11 23:38] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.687421] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.778038] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.066388] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.055297] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.608612] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.177161] kauditd_printk_skb: 19 callbacks suppressed
	[ +12.732427] kauditd_printk_skb: 2 callbacks suppressed
	[Dec11 23:39] kauditd_printk_skb: 7 callbacks suppressed
	[Dec11 23:41] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672] <==
	{"level":"warn","ts":"2024-12-11T23:37:36.758671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"466.976381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-12-11T23:37:36.760408Z","caller":"traceutil/trace.go:171","msg":"trace[2113477679] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1195; }","duration":"467.687272ms","start":"2024-12-11T23:37:36.291675Z","end":"2024-12-11T23:37:36.759363Z","steps":["trace[2113477679] 'range keys from in-memory index tree'  (duration: 466.866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:37:36.760559Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:37:36.291640Z","time spent":"468.903616ms","remote":"127.0.0.1:37384","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"warn","ts":"2024-12-11T23:37:36.759194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"398.729286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:37:36.760835Z","caller":"traceutil/trace.go:171","msg":"trace[683057435] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1195; }","duration":"400.378347ms","start":"2024-12-11T23:37:36.360448Z","end":"2024-12-11T23:37:36.760826Z","steps":["trace[683057435] 'range keys from in-memory index tree'  (duration: 398.676873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:37:36.760899Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:37:36.360413Z","time spent":"400.445579ms","remote":"127.0.0.1:37308","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-11T23:37:36.761344Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.436379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:37:36.762717Z","caller":"traceutil/trace.go:171","msg":"trace[672734179] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1195; }","duration":"150.807428ms","start":"2024-12-11T23:37:36.611900Z","end":"2024-12-11T23:37:36.762708Z","steps":["trace[672734179] 'range keys from in-memory index tree'  (duration: 149.396911ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:38:14.105535Z","caller":"traceutil/trace.go:171","msg":"trace[438081125] linearizableReadLoop","detail":"{readStateIndex:1434; appliedIndex:1433; }","duration":"183.133847ms","start":"2024-12-11T23:38:13.922378Z","end":"2024-12-11T23:38:14.105512Z","steps":["trace[438081125] 'read index received'  (duration: 182.94168ms)","trace[438081125] 'applied index is now lower than readState.Index'  (duration: 191.714µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:38:14.105948Z","caller":"traceutil/trace.go:171","msg":"trace[1415719321] transaction","detail":"{read_only:false; response_revision:1376; number_of_response:1; }","duration":"218.505569ms","start":"2024-12-11T23:38:13.887431Z","end":"2024-12-11T23:38:14.105937Z","steps":["trace[1415719321] 'process raft request'  (duration: 217.92654ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:14.106100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.721128ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:38:14.106124Z","caller":"traceutil/trace.go:171","msg":"trace[385756639] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1376; }","duration":"183.757634ms","start":"2024-12-11T23:38:13.922356Z","end":"2024-12-11T23:38:14.106114Z","steps":["trace[385756639] 'agreement among raft nodes before linearized reading'  (duration: 183.700343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:14.107252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.71252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:38:14.107284Z","caller":"traceutil/trace.go:171","msg":"trace[1205703228] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1376; }","duration":"152.796306ms","start":"2024-12-11T23:38:13.954480Z","end":"2024-12-11T23:38:14.107276Z","steps":["trace[1205703228] 'agreement among raft nodes before linearized reading'  (duration: 152.690427ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:38:32.036820Z","caller":"traceutil/trace.go:171","msg":"trace[711663538] linearizableReadLoop","detail":"{readStateIndex:1570; appliedIndex:1569; }","duration":"305.044897ms","start":"2024-12-11T23:38:31.731763Z","end":"2024-12-11T23:38:32.036808Z","steps":["trace[711663538] 'read index received'  (duration: 304.870642ms)","trace[711663538] 'applied index is now lower than readState.Index'  (duration: 173.748µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:38:32.037099Z","caller":"traceutil/trace.go:171","msg":"trace[1440514323] transaction","detail":"{read_only:false; response_revision:1507; number_of_response:1; }","duration":"315.50833ms","start":"2024-12-11T23:38:31.721580Z","end":"2024-12-11T23:38:32.037089Z","steps":["trace[1440514323] 'process raft request'  (duration: 315.13463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:32.037285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:38:31.721563Z","time spent":"315.601747ms","remote":"127.0.0.1:37308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3606,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/test-local-path\" mod_revision:1505 > success:<request_put:<key:\"/registry/pods/default/test-local-path\" value_size:3560 >> failure:<request_range:<key:\"/registry/pods/default/test-local-path\" > >"}
	{"level":"warn","ts":"2024-12-11T23:38:32.037491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.724566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/test-local-path\" ","response":"range_response_count:1 size:3621"}
	{"level":"info","ts":"2024-12-11T23:38:32.037532Z","caller":"traceutil/trace.go:171","msg":"trace[785189000] range","detail":"{range_begin:/registry/pods/default/test-local-path; range_end:; response_count:1; response_revision:1507; }","duration":"305.765692ms","start":"2024-12-11T23:38:31.731758Z","end":"2024-12-11T23:38:32.037524Z","steps":["trace[785189000] 'agreement among raft nodes before linearized reading'  (duration: 305.701523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:32.037554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:38:31.731718Z","time spent":"305.830284ms","remote":"127.0.0.1:37308","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3644,"request content":"key:\"/registry/pods/default/test-local-path\" "}
	{"level":"warn","ts":"2024-12-11T23:38:32.037677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.391983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-12-11T23:38:32.037730Z","caller":"traceutil/trace.go:171","msg":"trace[405331215] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1507; }","duration":"157.452538ms","start":"2024-12-11T23:38:31.880269Z","end":"2024-12-11T23:38:32.037722Z","steps":["trace[405331215] 'agreement among raft nodes before linearized reading'  (duration: 157.315212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:32.037738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.61111ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:38:32.038775Z","caller":"traceutil/trace.go:171","msg":"trace[1171929756] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1507; }","duration":"116.643367ms","start":"2024-12-11T23:38:31.922121Z","end":"2024-12-11T23:38:32.038765Z","steps":["trace[1171929756] 'agreement among raft nodes before linearized reading'  (duration: 115.603584ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:38:57.386137Z","caller":"traceutil/trace.go:171","msg":"trace[600661065] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"286.39895ms","start":"2024-12-11T23:38:57.099721Z","end":"2024-12-11T23:38:57.386120Z","steps":["trace[600661065] 'process raft request'  (duration: 286.287039ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:41:06 up 6 min,  0 users,  load average: 0.38, 1.18, 0.69
	Linux addons-021354 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca] <==
	E1211 23:37:22.307646       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.104.253:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.104.253:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.104.253:443: connect: connection refused" logger="UnhandledError"
	E1211 23:37:22.318549       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.104.253:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.104.253:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.104.253:443: connect: connection refused" logger="UnhandledError"
	I1211 23:37:22.400583       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1211 23:37:59.727102       1 conn.go:339] Error on socket receive: read tcp 192.168.39.225:8443->192.168.39.1:34578: use of closed network connection
	E1211 23:37:59.939714       1 conn.go:339] Error on socket receive: read tcp 192.168.39.225:8443->192.168.39.1:34602: use of closed network connection
	I1211 23:38:09.091339       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.100.164"}
	I1211 23:38:38.114044       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1211 23:38:39.153376       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1211 23:38:40.163757       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1211 23:38:40.326516       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.241.176"}
	I1211 23:39:05.628974       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1211 23:39:29.120535       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.120783       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:39:29.140504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.140559       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:39:29.173737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.173834       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:39:29.184438       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.184546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:39:29.206611       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.206665       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1211 23:39:30.177014       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1211 23:39:30.207480       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1211 23:39:30.235114       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1211 23:41:05.030200       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.105.125"}
	
	
	==> kube-controller-manager [0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a] <==
	E1211 23:39:45.538178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:39:49.450157       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:39:49.450351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:39:49.453668       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:39:49.454365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:39:50.017270       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:39:50.017378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:40:06.500809       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:40:06.500862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:40:14.054276       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:40:14.054415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:40:14.198149       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:40:14.198273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:40:22.860468       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:40:22.860522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:40:41.315783       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:40:41.315965       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:40:45.937808       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:40:45.937867       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:40:52.857957       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:40:52.858049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1211 23:41:04.865624       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.541322ms"
	I1211 23:41:04.897712       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.980702ms"
	I1211 23:41:04.897801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="41.833µs"
	I1211 23:41:04.904109       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.075µs"
	
	
	==> kube-proxy [bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1211 23:35:09.464101       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1211 23:35:09.588258       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	E1211 23:35:09.588355       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:35:10.096781       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1211 23:35:10.096824       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:35:10.096856       1 server_linux.go:169] "Using iptables Proxier"
	I1211 23:35:10.105408       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:35:10.105734       1 server.go:483] "Version info" version="v1.31.2"
	I1211 23:35:10.105747       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:35:10.109696       1 config.go:199] "Starting service config controller"
	I1211 23:35:10.109737       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1211 23:35:10.109772       1 config.go:105] "Starting endpoint slice config controller"
	I1211 23:35:10.109779       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1211 23:35:10.121806       1 config.go:328] "Starting node config controller"
	I1211 23:35:10.121822       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1211 23:35:10.210106       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1211 23:35:10.210164       1 shared_informer.go:320] Caches are synced for service config
	I1211 23:35:10.222466       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde] <==
	W1211 23:34:57.051463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:57.051492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.051544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:57.051556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.863926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1211 23:34:57.864095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.885476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1211 23:34:57.885620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.917754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1211 23:34:57.917919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.989726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1211 23:34:57.989850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.069069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:58.069572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.106695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:58.106827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.184398       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1211 23:34:58.184487       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1211 23:34:58.212527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1211 23:34:58.212623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.330871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:58.332038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.331846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:58.332249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1211 23:35:01.140084       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 11 23:40:59 addons-021354 kubelet[1211]: E1211 23:40:59.933910    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960459933418181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:40:59 addons-021354 kubelet[1211]: E1211 23:40:59.933955    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960459933418181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595934,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864451    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="node-driver-registrar"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864584    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="csi-snapshotter"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864593    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="hostpath"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864600    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a5f87de-b239-4076-baa2-e6e98f3e018b" containerName="volume-snapshot-controller"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864606    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="csi-external-health-monitor-controller"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864612    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c83f1e10-78d8-4652-9020-50342da3a576" containerName="csi-attacher"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864619    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d6b0ccf-5952-4814-a9ab-a8743c2e3c01" containerName="task-pv-container"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864683    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c3966cdf-e310-4ffa-9d98-70eccaabb23b" containerName="volume-snapshot-controller"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864690    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="563bb0d7-c97d-410a-ac13-e968cbe6809f" containerName="csi-resizer"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864698    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="liveness-probe"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: E1211 23:41:04.864716    1211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="csi-provisioner"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864825    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3966cdf-e310-4ffa-9d98-70eccaabb23b" containerName="volume-snapshot-controller"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864891    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="csi-snapshotter"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864902    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5f87de-b239-4076-baa2-e6e98f3e018b" containerName="volume-snapshot-controller"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864907    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="liveness-probe"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864915    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="csi-provisioner"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864921    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d6b0ccf-5952-4814-a9ab-a8743c2e3c01" containerName="task-pv-container"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864930    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="csi-external-health-monitor-controller"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864992    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="hostpath"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.864999    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="563bb0d7-c97d-410a-ac13-e968cbe6809f" containerName="csi-resizer"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.865005    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b465037-83b0-4363-a2e2-16ebd3d3ac4f" containerName="node-driver-registrar"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.865011    1211 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83f1e10-78d8-4652-9020-50342da3a576" containerName="csi-attacher"
	Dec 11 23:41:04 addons-021354 kubelet[1211]: I1211 23:41:04.949370    1211 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh2rd\" (UniqueName: \"kubernetes.io/projected/c6f30442-8f8d-47bb-83a7-e35c5f879569-kube-api-access-sh2rd\") pod \"hello-world-app-55bf9c44b4-8b2cl\" (UID: \"c6f30442-8f8d-47bb-83a7-e35c5f879569\") " pod="default/hello-world-app-55bf9c44b4-8b2cl"
	
	
	==> storage-provisioner [40b1897a55f24fb82118a636c26748e4b51ea902683b0e9fe5289033361bf6e1] <==
	I1211 23:35:12.318560       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1211 23:35:12.339534       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1211 23:35:12.339600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1211 23:35:12.349555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1211 23:35:12.349698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-021354_821de941-d5bc-4f30-b71e-a9a2b7db9d21!
	I1211 23:35:12.350335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d4f503c1-2c31-406f-b6bb-801542735018", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-021354_821de941-d5bc-4f30-b71e-a9a2b7db9d21 became leader
	I1211 23:35:12.451577       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-021354_821de941-d5bc-4f30-b71e-a9a2b7db9d21!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-021354 -n addons-021354
helpers_test.go:261: (dbg) Run:  kubectl --context addons-021354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-8b2cl ingress-nginx-admission-create-s7mc9 ingress-nginx-admission-patch-7xpv5
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-021354 describe pod hello-world-app-55bf9c44b4-8b2cl ingress-nginx-admission-create-s7mc9 ingress-nginx-admission-patch-7xpv5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-021354 describe pod hello-world-app-55bf9c44b4-8b2cl ingress-nginx-admission-create-s7mc9 ingress-nginx-admission-patch-7xpv5: exit status 1 (69.398653ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-8b2cl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-021354/192.168.39.225
	Start Time:       Wed, 11 Dec 2024 23:41:04 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sh2rd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sh2rd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-8b2cl to addons-021354
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s7mc9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7xpv5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-021354 describe pod hello-world-app-55bf9c44b4-8b2cl ingress-nginx-admission-create-s7mc9 ingress-nginx-admission-patch-7xpv5: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-021354 addons disable ingress-dns --alsologtostderr -v=1: (1.528913867s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-021354 addons disable ingress --alsologtostderr -v=1: (7.718236264s)
--- FAIL: TestAddons/parallel/Ingress (156.75s)

                                                
                                    
TestAddons/parallel/MetricsServer (360.31s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.68925ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-v42nk" [277fa5bf-2781-493c-86a5-d170dc8b9237] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004871901s
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (100.590528ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 3m7.420434735s

                                                
                                                
** /stderr **
I1211 23:38:14.422421   93600 retry.go:31] will retry after 3.820934101s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (62.178033ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 3m11.303786431s

                                                
                                                
** /stderr **
I1211 23:38:18.305832   93600 retry.go:31] will retry after 4.309758642s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (70.873605ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 3m15.685762792s

                                                
                                                
** /stderr **
I1211 23:38:22.687685   93600 retry.go:31] will retry after 8.855404514s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (64.311341ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 3m24.606289382s

                                                
                                                
** /stderr **
I1211 23:38:31.608268   93600 retry.go:31] will retry after 5.225115552s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (59.906267ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 3m29.892437394s

                                                
                                                
** /stderr **
I1211 23:38:36.894524   93600 retry.go:31] will retry after 20.92146427s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (64.582894ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 3m50.879496536s

                                                
                                                
** /stderr **
I1211 23:38:57.881421   93600 retry.go:31] will retry after 26.596831015s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (59.87428ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 4m17.537086388s

                                                
                                                
** /stderr **
I1211 23:39:24.538947   93600 retry.go:31] will retry after 46.915307264s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (61.025912ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 5m4.51488235s

                                                
                                                
** /stderr **
I1211 23:40:11.517335   93600 retry.go:31] will retry after 38.407498082s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (66.888459ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 5m42.992664651s

                                                
                                                
** /stderr **
I1211 23:40:49.994847   93600 retry.go:31] will retry after 1m9.424115569s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (60.303472ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 6m52.479538565s

                                                
                                                
** /stderr **
I1211 23:41:59.481619   93600 retry.go:31] will retry after 39.380764612s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (60.152519ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 7m31.921733459s

                                                
                                                
** /stderr **
I1211 23:42:38.924143   93600 retry.go:31] will retry after 1m27.01871761s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-021354 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-021354 top pods -n kube-system: exit status 1 (63.030043ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-bh5l6, age: 8m59.007436729s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
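The retry.go messages above come from the harness repeatedly running `kubectl top pods -n kube-system` with growing delays until metrics-server answers or the check gives up. A minimal standalone sketch of that poll-with-backoff pattern follows; it is an illustration only, not minikube's actual retry helper, and the context name, deadline, and delays are assumptions taken loosely from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Deadline and starting delay are illustrative; the log above shows the
	// real harness retrying for roughly nine minutes with increasing delays.
	deadline := time.Now().Add(9 * time.Minute)
	delay := 2 * time.Second

	for {
		// Same command the test keeps re-running (context name taken from the log).
		out, err := exec.Command("kubectl", "--context", "addons-021354",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("gave up waiting for metrics: %v\n%s", err, out)
			return
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		if delay < time.Minute {
			delay *= 2 // capped exponential backoff
		}
	}
}

Doubling the delay with a cap keeps early retries cheap while avoiding a tight loop once metrics-server is clearly not serving data yet.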
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-021354 -n addons-021354
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-021354 logs -n 25: (1.225484645s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-531520                                                                     | download-only-531520 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:34 UTC |
	| delete  | -p download-only-596435                                                                     | download-only-596435 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-922560 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC |                     |
	|         | binary-mirror-922560                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39457                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-922560                                                                     | binary-mirror-922560 | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC |                     |
	|         | addons-021354                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC |                     |
	|         | addons-021354                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-021354 --wait=true                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:34 UTC | 11 Dec 24 23:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:37 UTC | 11 Dec 24 23:37 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | -p addons-021354                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-021354 ip                                                                            | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-021354 ssh cat                                                                       | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | /opt/local-path-provisioner/pvc-6ce29942-9383-4c5e-b256-1d3d7149a74d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC | 11 Dec 24 23:38 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-021354 ssh curl -s                                                                   | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:39 UTC | 11 Dec 24 23:39 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021354 addons                                                                        | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:39 UTC | 11 Dec 24 23:39 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-021354 ip                                                                            | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:41 UTC | 11 Dec 24 23:41 UTC |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:41 UTC | 11 Dec 24 23:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-021354 addons disable                                                                | addons-021354        | jenkins | v1.34.0 | 11 Dec 24 23:41 UTC | 11 Dec 24 23:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:34:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:34:17.941564   94369 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:34:17.941676   94369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:34:17.941686   94369 out.go:358] Setting ErrFile to fd 2...
	I1211 23:34:17.941691   94369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:34:17.941851   94369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:34:17.942483   94369 out.go:352] Setting JSON to false
	I1211 23:34:17.943337   94369 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8200,"bootTime":1733951858,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:34:17.943396   94369 start.go:139] virtualization: kvm guest
	I1211 23:34:17.945493   94369 out.go:177] * [addons-021354] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1211 23:34:17.946823   94369 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:34:17.946892   94369 notify.go:220] Checking for updates...
	I1211 23:34:17.949318   94369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:34:17.950585   94369 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:34:17.951834   94369 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:34:17.953374   94369 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:34:17.954508   94369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:34:17.955834   94369 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:34:17.989182   94369 out.go:177] * Using the kvm2 driver based on user configuration
	I1211 23:34:17.990314   94369 start.go:297] selected driver: kvm2
	I1211 23:34:17.990327   94369 start.go:901] validating driver "kvm2" against <nil>
	I1211 23:34:17.990341   94369 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:34:17.991051   94369 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:34:17.991142   94369 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1211 23:34:18.006119   94369 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1211 23:34:18.006171   94369 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:34:18.006426   94369 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:34:18.006457   94369 cni.go:84] Creating CNI manager for ""
	I1211 23:34:18.006500   94369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:34:18.006512   94369 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:34:18.006555   94369 start.go:340] cluster config:
	{Name:addons-021354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:34:18.006677   94369 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:34:18.009108   94369 out.go:177] * Starting "addons-021354" primary control-plane node in "addons-021354" cluster
	I1211 23:34:18.010222   94369 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:34:18.010273   94369 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:34:18.010280   94369 cache.go:56] Caching tarball of preloaded images
	I1211 23:34:18.010368   94369 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:34:18.010379   94369 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:34:18.010690   94369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/config.json ...
	I1211 23:34:18.010710   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/config.json: {Name:mk5187adff29800e1ee3705d8e7a6af6bc743940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
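Editor's note: the two lines above record the cluster config being persisted to the profile's config.json. A small sketch for inspecting that file follows; the path is copied from the log, the field names (Driver, Memory) are assumed from the struct dump above, and the file is decoded generically rather than with minikube's own config types.

	// readconfig.go: print a couple of fields from the saved profile config (a sketch).
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/config.json")
		if err != nil {
			panic(err)
		}
		var cfg map[string]interface{}
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Println("Driver:", cfg["Driver"]) // expected "kvm2" per the log
		fmt.Println("Memory:", cfg["Memory"]) // expected 4000 per the log
	}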
	I1211 23:34:18.010857   94369 start.go:360] acquireMachinesLock for addons-021354: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:34:18.010901   94369 start.go:364] duration metric: took 30.807µs to acquireMachinesLock for "addons-021354"
	I1211 23:34:18.010919   94369 start.go:93] Provisioning new machine with config: &{Name:addons-021354 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:34:18.010985   94369 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:34:18.013328   94369 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1211 23:34:18.013517   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:34:18.013561   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:34:18.028613   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42645
	I1211 23:34:18.029137   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:34:18.029680   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:34:18.029700   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:34:18.030091   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:34:18.030252   94369 main.go:141] libmachine: (addons-021354) Calling .GetMachineName
	I1211 23:34:18.030393   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:18.030497   94369 start.go:159] libmachine.API.Create for "addons-021354" (driver="kvm2")
	I1211 23:34:18.030523   94369 client.go:168] LocalClient.Create starting
	I1211 23:34:18.030561   94369 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:34:18.100453   94369 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:34:18.277891   94369 main.go:141] libmachine: Running pre-create checks...
	I1211 23:34:18.277918   94369 main.go:141] libmachine: (addons-021354) Calling .PreCreateCheck
	I1211 23:34:18.278482   94369 main.go:141] libmachine: (addons-021354) Calling .GetConfigRaw
	I1211 23:34:18.279015   94369 main.go:141] libmachine: Creating machine...
	I1211 23:34:18.279035   94369 main.go:141] libmachine: (addons-021354) Calling .Create
	I1211 23:34:18.279259   94369 main.go:141] libmachine: (addons-021354) Creating KVM machine...
	I1211 23:34:18.280592   94369 main.go:141] libmachine: (addons-021354) DBG | found existing default KVM network
	I1211 23:34:18.281485   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.281311   94392 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
	I1211 23:34:18.281535   94369 main.go:141] libmachine: (addons-021354) DBG | created network xml: 
	I1211 23:34:18.281559   94369 main.go:141] libmachine: (addons-021354) DBG | <network>
	I1211 23:34:18.281569   94369 main.go:141] libmachine: (addons-021354) DBG |   <name>mk-addons-021354</name>
	I1211 23:34:18.281577   94369 main.go:141] libmachine: (addons-021354) DBG |   <dns enable='no'/>
	I1211 23:34:18.281583   94369 main.go:141] libmachine: (addons-021354) DBG |   
	I1211 23:34:18.281590   94369 main.go:141] libmachine: (addons-021354) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1211 23:34:18.281595   94369 main.go:141] libmachine: (addons-021354) DBG |     <dhcp>
	I1211 23:34:18.281600   94369 main.go:141] libmachine: (addons-021354) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1211 23:34:18.281606   94369 main.go:141] libmachine: (addons-021354) DBG |     </dhcp>
	I1211 23:34:18.281613   94369 main.go:141] libmachine: (addons-021354) DBG |   </ip>
	I1211 23:34:18.281621   94369 main.go:141] libmachine: (addons-021354) DBG |   
	I1211 23:34:18.281625   94369 main.go:141] libmachine: (addons-021354) DBG | </network>
	I1211 23:34:18.281631   94369 main.go:141] libmachine: (addons-021354) DBG | 
	I1211 23:34:18.286957   94369 main.go:141] libmachine: (addons-021354) DBG | trying to create private KVM network mk-addons-021354 192.168.39.0/24...
	I1211 23:34:18.357480   94369 main.go:141] libmachine: (addons-021354) DBG | private KVM network mk-addons-021354 192.168.39.0/24 created
	I1211 23:34:18.357507   94369 main.go:141] libmachine: (addons-021354) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354 ...
	I1211 23:34:18.357532   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.357422   94392 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:34:18.357543   94369 main.go:141] libmachine: (addons-021354) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:34:18.357558   94369 main.go:141] libmachine: (addons-021354) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:34:18.643058   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.642882   94392 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa...
	I1211 23:34:18.745328   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.745187   94392 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/addons-021354.rawdisk...
	I1211 23:34:18.745376   94369 main.go:141] libmachine: (addons-021354) DBG | Writing magic tar header
	I1211 23:34:18.745386   94369 main.go:141] libmachine: (addons-021354) DBG | Writing SSH key tar header
	I1211 23:34:18.745393   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:18.745317   94392 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354 ...
	I1211 23:34:18.745414   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354
	I1211 23:34:18.745428   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354 (perms=drwx------)
	I1211 23:34:18.745447   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:34:18.745454   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:34:18.745459   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:34:18.745468   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:34:18.745477   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:34:18.745483   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:34:18.745491   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:34:18.745515   94369 main.go:141] libmachine: (addons-021354) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:34:18.745519   94369 main.go:141] libmachine: (addons-021354) Creating domain...
	I1211 23:34:18.745576   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:34:18.745606   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:34:18.745620   94369 main.go:141] libmachine: (addons-021354) DBG | Checking permissions on dir: /home
	I1211 23:34:18.745630   94369 main.go:141] libmachine: (addons-021354) DBG | Skipping /home - not owner
	I1211 23:34:18.746981   94369 main.go:141] libmachine: (addons-021354) define libvirt domain using xml: 
	I1211 23:34:18.747019   94369 main.go:141] libmachine: (addons-021354) <domain type='kvm'>
	I1211 23:34:18.747030   94369 main.go:141] libmachine: (addons-021354)   <name>addons-021354</name>
	I1211 23:34:18.747037   94369 main.go:141] libmachine: (addons-021354)   <memory unit='MiB'>4000</memory>
	I1211 23:34:18.747045   94369 main.go:141] libmachine: (addons-021354)   <vcpu>2</vcpu>
	I1211 23:34:18.747054   94369 main.go:141] libmachine: (addons-021354)   <features>
	I1211 23:34:18.747073   94369 main.go:141] libmachine: (addons-021354)     <acpi/>
	I1211 23:34:18.747087   94369 main.go:141] libmachine: (addons-021354)     <apic/>
	I1211 23:34:18.747108   94369 main.go:141] libmachine: (addons-021354)     <pae/>
	I1211 23:34:18.747118   94369 main.go:141] libmachine: (addons-021354)     
	I1211 23:34:18.747128   94369 main.go:141] libmachine: (addons-021354)   </features>
	I1211 23:34:18.747137   94369 main.go:141] libmachine: (addons-021354)   <cpu mode='host-passthrough'>
	I1211 23:34:18.747143   94369 main.go:141] libmachine: (addons-021354)   
	I1211 23:34:18.747161   94369 main.go:141] libmachine: (addons-021354)   </cpu>
	I1211 23:34:18.747172   94369 main.go:141] libmachine: (addons-021354)   <os>
	I1211 23:34:18.747185   94369 main.go:141] libmachine: (addons-021354)     <type>hvm</type>
	I1211 23:34:18.747219   94369 main.go:141] libmachine: (addons-021354)     <boot dev='cdrom'/>
	I1211 23:34:18.747246   94369 main.go:141] libmachine: (addons-021354)     <boot dev='hd'/>
	I1211 23:34:18.747279   94369 main.go:141] libmachine: (addons-021354)     <bootmenu enable='no'/>
	I1211 23:34:18.747298   94369 main.go:141] libmachine: (addons-021354)   </os>
	I1211 23:34:18.747307   94369 main.go:141] libmachine: (addons-021354)   <devices>
	I1211 23:34:18.747315   94369 main.go:141] libmachine: (addons-021354)     <disk type='file' device='cdrom'>
	I1211 23:34:18.747323   94369 main.go:141] libmachine: (addons-021354)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/boot2docker.iso'/>
	I1211 23:34:18.747330   94369 main.go:141] libmachine: (addons-021354)       <target dev='hdc' bus='scsi'/>
	I1211 23:34:18.747335   94369 main.go:141] libmachine: (addons-021354)       <readonly/>
	I1211 23:34:18.747342   94369 main.go:141] libmachine: (addons-021354)     </disk>
	I1211 23:34:18.747348   94369 main.go:141] libmachine: (addons-021354)     <disk type='file' device='disk'>
	I1211 23:34:18.747355   94369 main.go:141] libmachine: (addons-021354)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:34:18.747363   94369 main.go:141] libmachine: (addons-021354)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/addons-021354.rawdisk'/>
	I1211 23:34:18.747370   94369 main.go:141] libmachine: (addons-021354)       <target dev='hda' bus='virtio'/>
	I1211 23:34:18.747375   94369 main.go:141] libmachine: (addons-021354)     </disk>
	I1211 23:34:18.747384   94369 main.go:141] libmachine: (addons-021354)     <interface type='network'>
	I1211 23:34:18.747390   94369 main.go:141] libmachine: (addons-021354)       <source network='mk-addons-021354'/>
	I1211 23:34:18.747400   94369 main.go:141] libmachine: (addons-021354)       <model type='virtio'/>
	I1211 23:34:18.747405   94369 main.go:141] libmachine: (addons-021354)     </interface>
	I1211 23:34:18.747416   94369 main.go:141] libmachine: (addons-021354)     <interface type='network'>
	I1211 23:34:18.747428   94369 main.go:141] libmachine: (addons-021354)       <source network='default'/>
	I1211 23:34:18.747435   94369 main.go:141] libmachine: (addons-021354)       <model type='virtio'/>
	I1211 23:34:18.747440   94369 main.go:141] libmachine: (addons-021354)     </interface>
	I1211 23:34:18.747446   94369 main.go:141] libmachine: (addons-021354)     <serial type='pty'>
	I1211 23:34:18.747463   94369 main.go:141] libmachine: (addons-021354)       <target port='0'/>
	I1211 23:34:18.747469   94369 main.go:141] libmachine: (addons-021354)     </serial>
	I1211 23:34:18.747475   94369 main.go:141] libmachine: (addons-021354)     <console type='pty'>
	I1211 23:34:18.747484   94369 main.go:141] libmachine: (addons-021354)       <target type='serial' port='0'/>
	I1211 23:34:18.747490   94369 main.go:141] libmachine: (addons-021354)     </console>
	I1211 23:34:18.747496   94369 main.go:141] libmachine: (addons-021354)     <rng model='virtio'>
	I1211 23:34:18.747502   94369 main.go:141] libmachine: (addons-021354)       <backend model='random'>/dev/random</backend>
	I1211 23:34:18.747508   94369 main.go:141] libmachine: (addons-021354)     </rng>
	I1211 23:34:18.747512   94369 main.go:141] libmachine: (addons-021354)     
	I1211 23:34:18.747518   94369 main.go:141] libmachine: (addons-021354)     
	I1211 23:34:18.747522   94369 main.go:141] libmachine: (addons-021354)   </devices>
	I1211 23:34:18.747528   94369 main.go:141] libmachine: (addons-021354) </domain>
	I1211 23:34:18.747536   94369 main.go:141] libmachine: (addons-021354) 
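Editor's note: the XML dumped above is the libvirt domain definition for the VM. As an illustration only (not minikube's implementation), a skeleton like it can be produced with Go's encoding/xml; only the name, memory and vcpu elements from the log are modelled here.

	// domainxml.go: emit a minimal libvirt-style domain definition (a sketch).
	package main

	import (
		"encoding/xml"
		"fmt"
	)

	type Domain struct {
		XMLName xml.Name `xml:"domain"`
		Type    string   `xml:"type,attr"`
		Name    string   `xml:"name"`
		Memory  struct {
			Unit  string `xml:"unit,attr"`
			Value int    `xml:",chardata"`
		} `xml:"memory"`
		VCPU int `xml:"vcpu"`
	}

	func main() {
		d := Domain{Type: "kvm", Name: "addons-021354", VCPU: 2}
		d.Memory.Unit = "MiB"
		d.Memory.Value = 4000
		out, _ := xml.MarshalIndent(d, "", "  ")
		fmt.Println(string(out))
	}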
	I1211 23:34:18.752053   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:19:d9:41 in network default
	I1211 23:34:18.752719   94369 main.go:141] libmachine: (addons-021354) Ensuring networks are active...
	I1211 23:34:18.752748   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:18.753643   94369 main.go:141] libmachine: (addons-021354) Ensuring network default is active
	I1211 23:34:18.754178   94369 main.go:141] libmachine: (addons-021354) Ensuring network mk-addons-021354 is active
	I1211 23:34:18.754729   94369 main.go:141] libmachine: (addons-021354) Getting domain xml...
	I1211 23:34:18.755564   94369 main.go:141] libmachine: (addons-021354) Creating domain...
	I1211 23:34:19.961705   94369 main.go:141] libmachine: (addons-021354) Waiting to get IP...
	I1211 23:34:19.962505   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:19.962873   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:19.962908   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:19.962860   94392 retry.go:31] will retry after 218.55825ms: waiting for machine to come up
	I1211 23:34:20.183538   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:20.183998   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:20.184029   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:20.183958   94392 retry.go:31] will retry after 278.620642ms: waiting for machine to come up
	I1211 23:34:20.464621   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:20.465135   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:20.465158   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:20.465090   94392 retry.go:31] will retry after 457.396089ms: waiting for machine to come up
	I1211 23:34:20.923898   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:20.924379   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:20.924405   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:20.924344   94392 retry.go:31] will retry after 367.140818ms: waiting for machine to come up
	I1211 23:34:21.292951   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:21.293415   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:21.293444   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:21.293366   94392 retry.go:31] will retry after 528.658319ms: waiting for machine to come up
	I1211 23:34:21.824318   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:21.824736   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:21.824760   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:21.824703   94392 retry.go:31] will retry after 693.958686ms: waiting for machine to come up
	I1211 23:34:22.520831   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:22.521279   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:22.521310   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:22.521224   94392 retry.go:31] will retry after 1.049432061s: waiting for machine to come up
	I1211 23:34:23.571993   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:23.572530   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:23.572561   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:23.572469   94392 retry.go:31] will retry after 1.299191566s: waiting for machine to come up
	I1211 23:34:24.874165   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:24.874604   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:24.874624   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:24.874550   94392 retry.go:31] will retry after 1.848004594s: waiting for machine to come up
	I1211 23:34:26.724008   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:26.724509   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:26.724535   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:26.724456   94392 retry.go:31] will retry after 2.062176111s: waiting for machine to come up
	I1211 23:34:28.787705   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:28.788119   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:28.788141   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:28.788070   94392 retry.go:31] will retry after 2.215274562s: waiting for machine to come up
	I1211 23:34:31.006847   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:31.007373   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:31.007401   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:31.007334   94392 retry.go:31] will retry after 2.679029007s: waiting for machine to come up
	I1211 23:34:33.688071   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:33.688469   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:33.688492   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:33.688427   94392 retry.go:31] will retry after 4.244655837s: waiting for machine to come up
	I1211 23:34:37.937787   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:37.938128   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find current IP address of domain addons-021354 in network mk-addons-021354
	I1211 23:34:37.938153   94369 main.go:141] libmachine: (addons-021354) DBG | I1211 23:34:37.938085   94392 retry.go:31] will retry after 3.67328737s: waiting for machine to come up
	I1211 23:34:41.615770   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.616215   94369 main.go:141] libmachine: (addons-021354) Found IP for machine: 192.168.39.225
	I1211 23:34:41.616231   94369 main.go:141] libmachine: (addons-021354) Reserving static IP address...
	I1211 23:34:41.616241   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has current primary IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.616643   94369 main.go:141] libmachine: (addons-021354) DBG | unable to find host DHCP lease matching {name: "addons-021354", mac: "52:54:00:f7:1d:ff", ip: "192.168.39.225"} in network mk-addons-021354
	I1211 23:34:41.691586   94369 main.go:141] libmachine: (addons-021354) DBG | Getting to WaitForSSH function...
	I1211 23:34:41.691641   94369 main.go:141] libmachine: (addons-021354) Reserved static IP address: 192.168.39.225
	I1211 23:34:41.691654   94369 main.go:141] libmachine: (addons-021354) Waiting for SSH to be available...
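Editor's note: the long run of "will retry after ..." lines above is a retry-with-growing-delay loop waiting for the VM to obtain a DHCP lease. A generic sketch of that pattern follows; lookupIP is a hypothetical stand-in for however the lease would actually be queried, and the attempt count and delays are illustrative.

	// waitforip.go: retry with an increasing, jittered delay (a sketch of the pattern above).
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP is hypothetical; a real implementation would inspect the DHCP
	// leases of the mk-addons-021354 network.
	func lookupIP() (string, error) {
		return "", errors.New("no lease yet")
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; attempt <= 15; attempt++ {
			if ip, err := lookupIP(); err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: will retry after %v\n", attempt, jittered)
			time.Sleep(jittered)
			delay *= 2
		}
		fmt.Println("gave up waiting for an IP address")
	}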
	I1211 23:34:41.694518   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.695077   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:41.695106   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.695302   94369 main.go:141] libmachine: (addons-021354) DBG | Using SSH client type: external
	I1211 23:34:41.695329   94369 main.go:141] libmachine: (addons-021354) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa (-rw-------)
	I1211 23:34:41.695375   94369 main.go:141] libmachine: (addons-021354) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:34:41.695390   94369 main.go:141] libmachine: (addons-021354) DBG | About to run SSH command:
	I1211 23:34:41.695401   94369 main.go:141] libmachine: (addons-021354) DBG | exit 0
	I1211 23:34:41.819929   94369 main.go:141] libmachine: (addons-021354) DBG | SSH cmd err, output: <nil>: 
	I1211 23:34:41.820213   94369 main.go:141] libmachine: (addons-021354) KVM machine creation complete!
	I1211 23:34:41.820583   94369 main.go:141] libmachine: (addons-021354) Calling .GetConfigRaw
	I1211 23:34:41.821223   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:41.821411   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:41.821624   94369 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1211 23:34:41.821644   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:34:41.823336   94369 main.go:141] libmachine: Detecting operating system of created instance...
	I1211 23:34:41.823377   94369 main.go:141] libmachine: Waiting for SSH to be available...
	I1211 23:34:41.823383   94369 main.go:141] libmachine: Getting to WaitForSSH function...
	I1211 23:34:41.823389   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:41.826349   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.826693   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:41.826723   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.826856   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:41.827049   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:41.827231   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:41.827385   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:41.827552   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:41.827781   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:41.827796   94369 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1211 23:34:41.934931   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
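Editor's note: the probe above is just "run `exit 0` over SSH and check it succeeds". A sketch of the same check using the third-party golang.org/x/crypto/ssh package (not minikube's own SSH helpers) follows; the address, user and key path are taken from the log.

	// sshprobe.go: confirm the VM answers SSH by running "exit 0" (a sketch).
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", "192.168.39.225:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		if err := session.Run("exit 0"); err != nil {
			panic(err)
		}
		fmt.Println("SSH is available")
	}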
	I1211 23:34:41.934959   94369 main.go:141] libmachine: Detecting the provisioner...
	I1211 23:34:41.934972   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:41.937932   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.938305   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:41.938340   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:41.938483   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:41.938717   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:41.938872   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:41.939025   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:41.939203   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:41.939388   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:41.939399   94369 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1211 23:34:42.048570   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1211 23:34:42.048701   94369 main.go:141] libmachine: found compatible host: buildroot
	I1211 23:34:42.048712   94369 main.go:141] libmachine: Provisioning with buildroot...
	I1211 23:34:42.048721   94369 main.go:141] libmachine: (addons-021354) Calling .GetMachineName
	I1211 23:34:42.048996   94369 buildroot.go:166] provisioning hostname "addons-021354"
	I1211 23:34:42.049026   94369 main.go:141] libmachine: (addons-021354) Calling .GetMachineName
	I1211 23:34:42.049237   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.052181   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.052558   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.052584   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.052722   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.052907   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.053149   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.053326   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.053503   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:42.053683   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:42.053694   94369 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-021354 && echo "addons-021354" | sudo tee /etc/hostname
	I1211 23:34:42.174313   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-021354
	
	I1211 23:34:42.174385   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.177253   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.177597   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.177621   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.177816   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.177975   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.178096   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.178207   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.178385   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:42.178559   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:42.178574   94369 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-021354' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-021354/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-021354' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:34:42.293305   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:34:42.293343   94369 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1211 23:34:42.293405   94369 buildroot.go:174] setting up certificates
	I1211 23:34:42.293426   94369 provision.go:84] configureAuth start
	I1211 23:34:42.293440   94369 main.go:141] libmachine: (addons-021354) Calling .GetMachineName
	I1211 23:34:42.293761   94369 main.go:141] libmachine: (addons-021354) Calling .GetIP
	I1211 23:34:42.296271   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.296595   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.296633   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.296853   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.299029   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.299361   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.299403   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.299508   94369 provision.go:143] copyHostCerts
	I1211 23:34:42.299587   94369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1211 23:34:42.299791   94369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1211 23:34:42.299896   94369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1211 23:34:42.300021   94369 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.addons-021354 san=[127.0.0.1 192.168.39.225 addons-021354 localhost minikube]
	I1211 23:34:42.379626   94369 provision.go:177] copyRemoteCerts
	I1211 23:34:42.379701   94369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:34:42.379729   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.382464   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.382775   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.382804   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.383011   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.383212   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.383386   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.383532   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:34:42.466523   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 23:34:42.491645   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1211 23:34:42.515877   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 23:34:42.540074   94369 provision.go:87] duration metric: took 246.632691ms to configureAuth
	I1211 23:34:42.540124   94369 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:34:42.540357   94369 config.go:182] Loaded profile config "addons-021354": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:34:42.540479   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.543110   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.543450   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.543484   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.543684   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.543877   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.544034   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.544152   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.544297   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:42.544455   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:42.544469   94369 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:34:42.771254   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:34:42.771290   94369 main.go:141] libmachine: Checking connection to Docker...
	I1211 23:34:42.771298   94369 main.go:141] libmachine: (addons-021354) Calling .GetURL
	I1211 23:34:42.772695   94369 main.go:141] libmachine: (addons-021354) DBG | Using libvirt version 6000000
	I1211 23:34:42.774805   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.775131   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.775164   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.775353   94369 main.go:141] libmachine: Docker is up and running!
	I1211 23:34:42.775371   94369 main.go:141] libmachine: Reticulating splines...
	I1211 23:34:42.775382   94369 client.go:171] duration metric: took 24.744846556s to LocalClient.Create
	I1211 23:34:42.775408   94369 start.go:167] duration metric: took 24.744911505s to libmachine.API.Create "addons-021354"
	I1211 23:34:42.775429   94369 start.go:293] postStartSetup for "addons-021354" (driver="kvm2")
	I1211 23:34:42.775443   94369 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:34:42.775467   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.775735   94369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:34:42.775762   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.777894   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.778200   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.778231   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.778355   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.778522   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.778652   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.778778   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:34:42.862454   94369 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:34:42.866931   94369 info.go:137] Remote host: Buildroot 2023.02.9
	I1211 23:34:42.866959   94369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1211 23:34:42.867067   94369 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1211 23:34:42.867093   94369 start.go:296] duration metric: took 91.655173ms for postStartSetup
	I1211 23:34:42.867139   94369 main.go:141] libmachine: (addons-021354) Calling .GetConfigRaw
	I1211 23:34:42.867784   94369 main.go:141] libmachine: (addons-021354) Calling .GetIP
	I1211 23:34:42.870295   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.870709   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.870738   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.870975   94369 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/config.json ...
	I1211 23:34:42.871155   94369 start.go:128] duration metric: took 24.860159392s to createHost
	I1211 23:34:42.871182   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.873555   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.873901   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.873935   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.874051   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.874253   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.874445   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.874573   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.874744   94369 main.go:141] libmachine: Using SSH client type: native
	I1211 23:34:42.874898   94369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.225 22 <nil> <nil>}
	I1211 23:34:42.874908   94369 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:34:42.980743   94369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733960082.946357302
	
	I1211 23:34:42.980778   94369 fix.go:216] guest clock: 1733960082.946357302
	I1211 23:34:42.980787   94369 fix.go:229] Guest: 2024-12-11 23:34:42.946357302 +0000 UTC Remote: 2024-12-11 23:34:42.871169504 +0000 UTC m=+24.967954718 (delta=75.187798ms)
	I1211 23:34:42.980827   94369 fix.go:200] guest clock delta is within tolerance: 75.187798ms
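The fix.go lines above compare the guest clock (read via "date +%s.%N") against the host clock and accept the start because the delta (75.187798ms here) is within tolerance. A minimal sketch of that comparison, assuming a hypothetical 2s tolerance rather than whatever value minikube actually uses:

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports the absolute guest/host clock delta and
// whether it falls inside the given tolerance.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(75 * time.Millisecond) // delta similar to the one logged
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}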
	I1211 23:34:42.980835   94369 start.go:83] releasing machines lock for "addons-021354", held for 24.969923936s
	I1211 23:34:42.980858   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.981203   94369 main.go:141] libmachine: (addons-021354) Calling .GetIP
	I1211 23:34:42.983909   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.984245   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.984273   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.984407   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.984897   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.985080   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:34:42.985188   94369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:34:42.985247   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.985295   94369 ssh_runner.go:195] Run: cat /version.json
	I1211 23:34:42.985323   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:34:42.987869   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.988121   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.988211   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.988239   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.988341   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.988435   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:42.988504   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:42.988522   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.988588   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:34:42.988659   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.988757   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:34:42.988784   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:34:42.988874   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:34:42.988994   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:34:43.091309   94369 ssh_runner.go:195] Run: systemctl --version
	I1211 23:34:43.097573   94369 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:34:43.255661   94369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:34:43.262937   94369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:34:43.263022   94369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:34:43.279351   94369 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:34:43.279386   94369 start.go:495] detecting cgroup driver to use...
	I1211 23:34:43.279468   94369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:34:43.294921   94369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:34:43.309278   94369 docker.go:217] disabling cri-docker service (if available) ...
	I1211 23:34:43.309335   94369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:34:43.323080   94369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:34:43.336590   94369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:34:43.452359   94369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:34:43.617934   94369 docker.go:233] disabling docker service ...
	I1211 23:34:43.618010   94369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:34:43.632379   94369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:34:43.645493   94369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:34:43.779942   94369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:34:43.905781   94369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:34:43.920043   94369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:34:43.938665   94369 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1211 23:34:43.938740   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:43.949165   94369 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:34:43.949252   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:43.959721   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:43.969897   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:43.980430   94369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:34:43.991220   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:44.001386   94369 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:34:44.018913   94369 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
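The sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. A rough Go equivalent of the first two edits, written against a scratch copy of the file (illustrative only, not the code that produced this log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf rewrites the pause_image and cgroup_manager keys in a CRI-O
// drop-in file, mirroring the first two sed commands in the log.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	// /tmp/02-crio.conf.example is a scratch copy, not the live config.
	err := patchCrioConf("/tmp/02-crio.conf.example",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}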
	I1211 23:34:44.029279   94369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:34:44.038630   94369 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:34:44.038683   94369 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:34:44.051397   94369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:34:44.061670   94369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:34:44.183084   94369 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:34:44.273357   94369 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:34:44.273444   94369 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:34:44.278366   94369 start.go:563] Will wait 60s for crictl version
	I1211 23:34:44.278436   94369 ssh_runner.go:195] Run: which crictl
	I1211 23:34:44.282275   94369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:34:44.325117   94369 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:34:44.325238   94369 ssh_runner.go:195] Run: crio --version
	I1211 23:34:44.353131   94369 ssh_runner.go:195] Run: crio --version
	I1211 23:34:44.383439   94369 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1211 23:34:44.385013   94369 main.go:141] libmachine: (addons-021354) Calling .GetIP
	I1211 23:34:44.387971   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:44.388320   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:34:44.388352   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:34:44.388617   94369 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:34:44.393042   94369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:34:44.406459   94369 kubeadm.go:883] updating cluster {Name:addons-021354 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:34:44.406571   94369 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:34:44.406621   94369 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:34:44.440300   94369 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1211 23:34:44.440371   94369 ssh_runner.go:195] Run: which lz4
	I1211 23:34:44.444596   94369 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:34:44.448992   94369 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:34:44.449028   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1211 23:34:45.749547   94369 crio.go:462] duration metric: took 1.304999714s to copy over tarball
	I1211 23:34:45.749631   94369 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:34:47.887053   94369 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.137380299s)
	I1211 23:34:47.887097   94369 crio.go:469] duration metric: took 2.137514144s to extract the tarball
	I1211 23:34:47.887111   94369 ssh_runner.go:146] rm: /preloaded.tar.lz4
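The preload step above copies preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it under /var with tar. A minimal sketch that shells out to the same tar invocation shown in the log (assumes sudo and lz4 are available on the machine where it runs; the paths are the ones from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// main streams a .tar.lz4 preload into /var using the tar command line
// recorded above.
func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}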
	I1211 23:34:47.925261   94369 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:34:47.970564   94369 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:34:47.970598   94369 cache_images.go:84] Images are preloaded, skipping loading
	I1211 23:34:47.970610   94369 kubeadm.go:934] updating node { 192.168.39.225 8443 v1.31.2 crio true true} ...
	I1211 23:34:47.970780   94369 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-021354 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:34:47.970886   94369 ssh_runner.go:195] Run: crio config
	I1211 23:34:48.019180   94369 cni.go:84] Creating CNI manager for ""
	I1211 23:34:48.019206   94369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:34:48.019220   94369 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 23:34:48.019241   94369 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.225 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-021354 NodeName:addons-021354 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:34:48.019387   94369 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-021354"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.225"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.225"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
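The kubeadm/kubelet/kube-proxy configuration above is a multi-document YAML file that is written out as /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a quick way to sanity-check such a file locally, one could decode each document and print its kind; this is a hypothetical snippet using gopkg.in/yaml.v3 as an assumed dependency, not something the test itself runs:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for this sketch
)

// main parses a multi-document kubeadm config file like the one dumped above
// and prints each document's kind and apiVersion.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all documents have been read
		}
		fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
	}
}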
	
	I1211 23:34:48.019464   94369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1211 23:34:48.030214   94369 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 23:34:48.030305   94369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1211 23:34:48.040711   94369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1211 23:34:48.058366   94369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:34:48.075616   94369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1211 23:34:48.093085   94369 ssh_runner.go:195] Run: grep 192.168.39.225	control-plane.minikube.internal$ /etc/hosts
	I1211 23:34:48.097302   94369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:34:48.110395   94369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:34:48.234458   94369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:34:48.251528   94369 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354 for IP: 192.168.39.225
	I1211 23:34:48.251553   94369 certs.go:194] generating shared ca certs ...
	I1211 23:34:48.251570   94369 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.251738   94369 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1211 23:34:48.320769   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt ...
	I1211 23:34:48.320801   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt: {Name:mk18b608077b42fcba0e790a13db29beca86d40c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.321010   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key ...
	I1211 23:34:48.321030   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key: {Name:mk2b0a248c0dc5d6780db8d7389e3ce61a08ccca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.321149   94369 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1211 23:34:48.534784   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt ...
	I1211 23:34:48.534817   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt: {Name:mkb7f6f01c296a3f917af5c8a02f5476362bdc37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.535023   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key ...
	I1211 23:34:48.535042   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key: {Name:mk7dfc75f1bbd84ca395fd67ba0905ce60c57d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.535168   94369 certs.go:256] generating profile certs ...
	I1211 23:34:48.535264   94369 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.key
	I1211 23:34:48.535285   94369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt with IP's: []
	I1211 23:34:48.753672   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt ...
	I1211 23:34:48.753707   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: {Name:mk66efaed89910931834575b7294af4c2524ef5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.753901   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.key ...
	I1211 23:34:48.753933   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.key: {Name:mkae769d04795e681b2a27f0079fb20a11c3e804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.754055   94369 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key.a60bf79e
	I1211 23:34:48.754082   94369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt.a60bf79e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225]
	I1211 23:34:48.844857   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt.a60bf79e ...
	I1211 23:34:48.844890   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt.a60bf79e: {Name:mk77eeaebd337092de4f92552ce2038f6245cb1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.845075   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key.a60bf79e ...
	I1211 23:34:48.845099   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key.a60bf79e: {Name:mk7e974e7014007323859a66a208a75ba3d46736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.845193   94369 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt.a60bf79e -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt
	I1211 23:34:48.845296   94369 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key.a60bf79e -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key
	I1211 23:34:48.845367   94369 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.key
	I1211 23:34:48.845390   94369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.crt with IP's: []
	I1211 23:34:48.905793   94369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.crt ...
	I1211 23:34:48.905822   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.crt: {Name:mkba9992178a3089f32a431c493454ddca2f3a3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:34:48.906004   94369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.key ...
	I1211 23:34:48.906028   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.key: {Name:mkfc139c2f20d1c2a8344c445c646ad58142ed8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
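certs.go above generates the shared CAs plus profile certificates, with the apiserver cert signed for the IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.225]. For orientation only, here is a compact self-signed example with the same SAN list using Go's crypto/x509; this is not minikube's cert helper, which signs the server cert with the generated CA rather than self-signing:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// main creates a self-signed serving certificate carrying the IP and DNS SANs
// seen in the log and prints it in PEM form.
func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-021354"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.225"),
		},
		DNSNames: []string{"addons-021354", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}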
	I1211 23:34:48.906265   94369 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1211 23:34:48.906308   94369 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1211 23:34:48.906355   94369 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:34:48.906392   94369 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1211 23:34:48.907110   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:34:48.945595   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:34:48.974831   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:34:49.002072   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 23:34:49.026606   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1211 23:34:49.051242   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1211 23:34:49.075778   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:34:49.100139   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1211 23:34:49.124995   94369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:34:49.149379   94369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:34:49.166340   94369 ssh_runner.go:195] Run: openssl version
	I1211 23:34:49.172319   94369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 23:34:49.183032   94369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:34:49.187622   94369 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:34:49.187700   94369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:34:49.193473   94369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 23:34:49.204472   94369 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:34:49.208779   94369 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:34:49.208842   94369 kubeadm.go:392] StartCluster: {Name:addons-021354 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-021354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:34:49.208957   94369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:34:49.209033   94369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:34:49.244486   94369 cri.go:89] found id: ""
	I1211 23:34:49.244584   94369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:34:49.254780   94369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:34:49.265481   94369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:34:49.277627   94369 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:34:49.277650   94369 kubeadm.go:157] found existing configuration files:
	
	I1211 23:34:49.277699   94369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:34:49.287135   94369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:34:49.287200   94369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:34:49.296941   94369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:34:49.306402   94369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:34:49.306469   94369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:34:49.316405   94369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:34:49.325919   94369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:34:49.325988   94369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:34:49.335575   94369 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:34:49.344724   94369 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:34:49.344787   94369 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:34:49.354496   94369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:34:49.518965   94369 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:35:00.177960   94369 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1211 23:35:00.178054   94369 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 23:35:00.178136   94369 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:35:00.178217   94369 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:35:00.178295   94369 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:35:00.178418   94369 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:35:00.180050   94369 out.go:235]   - Generating certificates and keys ...
	I1211 23:35:00.180154   94369 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 23:35:00.180222   94369 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 23:35:00.180290   94369 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:35:00.180338   94369 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:35:00.180389   94369 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:35:00.180456   94369 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1211 23:35:00.180539   94369 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1211 23:35:00.180703   94369 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-021354 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I1211 23:35:00.180759   94369 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1211 23:35:00.180879   94369 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-021354 localhost] and IPs [192.168.39.225 127.0.0.1 ::1]
	I1211 23:35:00.180938   94369 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:35:00.181005   94369 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:35:00.181050   94369 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1211 23:35:00.181102   94369 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:35:00.181146   94369 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:35:00.181198   94369 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:35:00.181247   94369 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:35:00.181313   94369 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:35:00.181420   94369 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:35:00.181545   94369 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:35:00.181647   94369 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:35:00.182986   94369 out.go:235]   - Booting up control plane ...
	I1211 23:35:00.183083   94369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:35:00.183170   94369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:35:00.183268   94369 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:35:00.183397   94369 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:35:00.183487   94369 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:35:00.183521   94369 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 23:35:00.183654   94369 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:35:00.183746   94369 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:35:00.183804   94369 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00206526s
	I1211 23:35:00.183891   94369 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1211 23:35:00.183965   94369 kubeadm.go:310] [api-check] The API server is healthy after 5.002956694s
	I1211 23:35:00.184093   94369 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:35:00.184260   94369 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:35:00.184359   94369 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:35:00.184582   94369 kubeadm.go:310] [mark-control-plane] Marking the node addons-021354 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:35:00.184668   94369 kubeadm.go:310] [bootstrap-token] Using token: fkc42n.k8j80h5ids5wbhf0
	I1211 23:35:00.186809   94369 out.go:235]   - Configuring RBAC rules ...
	I1211 23:35:00.186933   94369 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:35:00.187022   94369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:35:00.187170   94369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:35:00.187328   94369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:35:00.187475   94369 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:35:00.187623   94369 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:35:00.187727   94369 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:35:00.187780   94369 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 23:35:00.187854   94369 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 23:35:00.187864   94369 kubeadm.go:310] 
	I1211 23:35:00.187945   94369 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 23:35:00.187959   94369 kubeadm.go:310] 
	I1211 23:35:00.188109   94369 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 23:35:00.188125   94369 kubeadm.go:310] 
	I1211 23:35:00.188166   94369 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 23:35:00.188253   94369 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:35:00.188336   94369 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:35:00.188345   94369 kubeadm.go:310] 
	I1211 23:35:00.188421   94369 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 23:35:00.188431   94369 kubeadm.go:310] 
	I1211 23:35:00.188504   94369 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:35:00.188512   94369 kubeadm.go:310] 
	I1211 23:35:00.188592   94369 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 23:35:00.188707   94369 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:35:00.188765   94369 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:35:00.188771   94369 kubeadm.go:310] 
	I1211 23:35:00.188843   94369 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:35:00.188906   94369 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 23:35:00.188912   94369 kubeadm.go:310] 
	I1211 23:35:00.189009   94369 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fkc42n.k8j80h5ids5wbhf0 \
	I1211 23:35:00.189102   94369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1211 23:35:00.189122   94369 kubeadm.go:310] 	--control-plane 
	I1211 23:35:00.189129   94369 kubeadm.go:310] 
	I1211 23:35:00.189196   94369 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:35:00.189203   94369 kubeadm.go:310] 
	I1211 23:35:00.189267   94369 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fkc42n.k8j80h5ids5wbhf0 \
	I1211 23:35:00.189376   94369 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1211 23:35:00.189387   94369 cni.go:84] Creating CNI manager for ""
	I1211 23:35:00.189397   94369 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:35:00.190908   94369 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1211 23:35:00.192283   94369 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1211 23:35:00.203788   94369 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
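
For reference, the bridge conflist installed by the step above can be read back from the node over SSH. The command below is illustrative only (it reuses this run's profile name and the path from the log) and is not part of the test flow:

    # inspect the bridge CNI config that was just written to the node
    minikube -p addons-021354 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
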
	I1211 23:35:00.225264   94369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:35:00.225411   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:00.225418   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-021354 minikube.k8s.io/updated_at=2024_12_11T23_35_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=addons-021354 minikube.k8s.io/primary=true
	I1211 23:35:00.273435   94369 ops.go:34] apiserver oom_adj: -16
	I1211 23:35:00.355692   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:00.855791   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:01.356376   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:01.856632   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:02.355728   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:02.856575   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:03.356570   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:03.855769   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:04.356625   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:04.856042   94369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:35:04.990028   94369 kubeadm.go:1113] duration metric: took 4.764699966s to wait for elevateKubeSystemPrivileges
	I1211 23:35:04.990073   94369 kubeadm.go:394] duration metric: took 15.781235624s to StartCluster
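
The repeated "kubectl get sa default" calls above poll until the "default" ServiceAccount exists (it is created asynchronously by the service account controller); the log attributes this wait to elevateKubeSystemPrivileges. A shell sketch of that wait, reusing the binary path and flags shown in the log (the loop itself is illustrative, not minikube's code):

    # poll until the "default" ServiceAccount exists before proceeding
    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly half-second retries
    done
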
	I1211 23:35:04.990099   94369 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:35:04.990241   94369 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:35:04.990639   94369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:35:04.990855   94369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:35:04.990898   94369 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.225 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:35:04.991006   94369 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
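
The toEnable map above is the programmatic form of the addon selection for this profile; from the CLI the same toggles would be driven with commands like these (illustrative only — the test exercises the Go addons API directly rather than shelling out):

    # enable and inspect addons on the same profile from the command line
    minikube -p addons-021354 addons enable ingress
    minikube -p addons-021354 addons enable metrics-server
    minikube -p addons-021354 addons list
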
	I1211 23:35:04.991129   94369 config.go:182] Loaded profile config "addons-021354": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:35:04.991145   94369 addons.go:69] Setting yakd=true in profile "addons-021354"
	I1211 23:35:04.991149   94369 addons.go:69] Setting default-storageclass=true in profile "addons-021354"
	I1211 23:35:04.991166   94369 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-021354"
	I1211 23:35:04.991175   94369 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-021354"
	I1211 23:35:04.991181   94369 addons.go:69] Setting registry=true in profile "addons-021354"
	I1211 23:35:04.991188   94369 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-021354"
	I1211 23:35:04.991195   94369 addons.go:69] Setting storage-provisioner=true in profile "addons-021354"
	I1211 23:35:04.991199   94369 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-021354"
	I1211 23:35:04.991213   94369 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-021354"
	I1211 23:35:04.991224   94369 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-021354"
	I1211 23:35:04.991231   94369 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-021354"
	I1211 23:35:04.991242   94369 addons.go:69] Setting gcp-auth=true in profile "addons-021354"
	I1211 23:35:04.991258   94369 mustload.go:65] Loading cluster: addons-021354
	I1211 23:35:04.991266   94369 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-021354"
	I1211 23:35:04.991269   94369 addons.go:69] Setting volumesnapshots=true in profile "addons-021354"
	I1211 23:35:04.991288   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991301   94369 addons.go:234] Setting addon volumesnapshots=true in "addons-021354"
	I1211 23:35:04.991320   94369 addons.go:69] Setting ingress=true in profile "addons-021354"
	I1211 23:35:04.991341   94369 addons.go:234] Setting addon ingress=true in "addons-021354"
	I1211 23:35:04.991356   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991368   94369 addons.go:69] Setting ingress-dns=true in profile "addons-021354"
	I1211 23:35:04.991385   94369 addons.go:234] Setting addon ingress-dns=true in "addons-021354"
	I1211 23:35:04.991386   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991430   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991456   94369 config.go:182] Loaded profile config "addons-021354": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:35:04.991702   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991722   94369 addons.go:69] Setting volcano=true in profile "addons-021354"
	I1211 23:35:04.991190   94369 addons.go:234] Setting addon registry=true in "addons-021354"
	I1211 23:35:04.991735   94369 addons.go:234] Setting addon volcano=true in "addons-021354"
	I1211 23:35:04.991750   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991780   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991808   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991816   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991824   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991846   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991215   94369 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-021354"
	I1211 23:35:04.991299   94369 addons.go:69] Setting inspektor-gadget=true in profile "addons-021354"
	I1211 23:35:04.991898   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991905   94369 addons.go:234] Setting addon inspektor-gadget=true in "addons-021354"
	I1211 23:35:04.991909   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991928   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991204   94369 addons.go:234] Setting addon storage-provisioner=true in "addons-021354"
	I1211 23:35:04.992227   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.991928   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.992287   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991233   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991753   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991705   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.992499   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991190   94369 addons.go:69] Setting cloud-spanner=true in profile "addons-021354"
	I1211 23:35:04.992672   94369 addons.go:234] Setting addon cloud-spanner=true in "addons-021354"
	I1211 23:35:04.992683   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.992703   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.992715   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.992805   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.992837   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991758   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.991175   94369 addons.go:234] Setting addon yakd=true in "addons-021354"
	I1211 23:35:04.993062   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.993080   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.993089   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.993246   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.993292   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991768   94369 addons.go:69] Setting metrics-server=true in profile "addons-021354"
	I1211 23:35:04.993382   94369 addons.go:234] Setting addon metrics-server=true in "addons-021354"
	I1211 23:35:04.993423   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:04.993797   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.993859   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991768   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.996462   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.991791   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:04.996595   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:04.996631   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.005963   94369 out.go:177] * Verifying Kubernetes components...
	I1211 23:35:04.992261   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.012723   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41317
	I1211 23:35:05.012951   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35017
	I1211 23:35:05.013311   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.013552   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.013965   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.013995   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.014335   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.014391   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.014411   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.014803   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.015000   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.015019   94369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:35:05.015129   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
	I1211 23:35:05.015034   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.015196   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.015474   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.016016   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.016040   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.016437   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.018225   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41111
	I1211 23:35:05.022613   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32811
	I1211 23:35:05.023934   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.023985   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.024352   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.024400   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.025922   94369 addons.go:234] Setting addon default-storageclass=true in "addons-021354"
	I1211 23:35:05.025990   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:05.026375   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.026419   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.026827   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.026865   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.027372   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.027523   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.027604   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
	I1211 23:35:05.028068   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.028088   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.028169   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.028858   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.029036   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.029060   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.029222   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.029234   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.029499   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.030120   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.030159   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.030391   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.030471   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.031008   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.031058   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.032392   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:05.032763   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.032799   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.033787   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33649
	I1211 23:35:05.046041   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I1211 23:35:05.046759   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.051722   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
	I1211 23:35:05.051734   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I1211 23:35:05.051856   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.051875   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.052252   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.052378   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.052387   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.053031   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.053056   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.053357   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.053377   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.053444   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.053664   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I1211 23:35:05.053696   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.053664   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.054020   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.054061   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.054370   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.054415   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.054636   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.055135   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.055158   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.055534   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.056875   94369 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-021354"
	I1211 23:35:05.056921   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:05.057287   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.057327   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.058516   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1211 23:35:05.058943   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.059050   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I1211 23:35:05.059466   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.059489   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.059532   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.060293   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.060335   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.060662   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.060682   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.060760   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46723
	I1211 23:35:05.061051   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.061210   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.061633   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.061662   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.062300   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.062319   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.065654   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37891
	I1211 23:35:05.066526   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.068493   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.069106   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.069155   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.072221   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.072276   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.072581   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.073211   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.073232   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.073641   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.073907   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.074306   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.074338   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.074488   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.074503   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.074895   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.075453   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.075489   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.092020   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I1211 23:35:05.092128   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I1211 23:35:05.092470   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.093092   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.093130   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.093204   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.093620   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.094243   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.094305   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.094558   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I1211 23:35:05.094909   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I1211 23:35:05.095093   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.095603   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.095621   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.095682   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.096028   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.096248   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.096263   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.096490   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.096621   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.096633   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.097047   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.097606   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.097673   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.098037   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43655
	I1211 23:35:05.098752   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.100016   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1211 23:35:05.100657   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.100671   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.100764   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43051
	I1211 23:35:05.101154   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.101242   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1211 23:35:05.101412   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.101431   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.101645   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.101662   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.101815   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.102096   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.102134   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.102223   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.102269   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.102822   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:05.102862   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:05.102871   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1211 23:35:05.102891   94369 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1211 23:35:05.102916   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.102999   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.103195   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.103376   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.105170   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.105872   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35269
	I1211 23:35:05.106270   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.106680   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.107138   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.107156   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.107484   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.107669   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.108306   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.109756   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.110047   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.110277   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.110413   94369 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1211 23:35:05.110533   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.110738   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.110883   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.111002   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.111639   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1211 23:35:05.111815   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40213
	I1211 23:35:05.111824   94369 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:35:05.111839   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1211 23:35:05.111857   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.112322   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.113148   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.113166   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.113701   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.114239   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.115442   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1211 23:35:05.115604   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.116293   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.116373   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.116387   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.116504   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.116728   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.117199   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.117885   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.118037   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.118462   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1211 23:35:05.119624   94369 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:35:05.119638   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38107
	I1211 23:35:05.120131   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.120661   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.120682   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.121035   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.121120   94369 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:35:05.121143   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:35:05.121166   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.121227   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.121624   94369 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1211 23:35:05.122155   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1211 23:35:05.123836   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44279
	I1211 23:35:05.124273   94369 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:35:05.124291   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1211 23:35:05.124310   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.125147   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37979
	I1211 23:35:05.125473   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.125743   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.126160   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.126187   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.126239   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.126396   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1211 23:35:05.126627   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.126696   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.127210   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.127641   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.127717   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.127719   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.127737   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.128026   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.128193   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.128220   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.128379   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.128643   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.129969   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1211 23:35:05.130160   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.130258   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.130822   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.130847   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.131068   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.131210   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.131278   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.131437   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.131732   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.131758   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.132548   94369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:35:05.133487   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1211 23:35:05.134861   94369 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1211 23:35:05.134934   94369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1211 23:35:05.134938   94369 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1211 23:35:05.134970   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41355
	I1211 23:35:05.135085   94369 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1211 23:35:05.135605   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.136242   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.136261   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.136293   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1211 23:35:05.136751   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.136826   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.136905   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.137087   94369 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:35:05.137105   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1211 23:35:05.137123   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.137176   94369 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1211 23:35:05.137185   94369 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1211 23:35:05.137198   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.137244   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1211 23:35:05.137255   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1211 23:35:05.137269   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.137311   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.137326   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.138301   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.138605   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.138725   94369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:35:05.140513   94369 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:35:05.140540   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1211 23:35:05.140561   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.140786   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.140828   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1211 23:35:05.141377   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.142385   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.142443   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.142496   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.142522   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.143320   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.143339   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.143638   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.143378   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:05.143695   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:05.143401   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.143717   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.143415   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42039
	I1211 23:35:05.143431   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.143869   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.143682   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.143889   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.143953   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:05.143966   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:05.143974   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.143980   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:05.143989   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:05.143996   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:05.144205   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.144228   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.144279   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.144281   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.144329   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.144405   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.144413   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:05.144426   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	W1211 23:35:05.144533   94369 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1211 23:35:05.144823   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.145177   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.145193   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.145312   94369 out.go:177]   - Using image docker.io/registry:2.8.3
	I1211 23:35:05.145553   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.145615   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.145628   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.145829   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.146035   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.146219   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.146330   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.146431   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.147418   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1211 23:35:05.147730   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.147900   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.148284   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.148426   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.148585   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.149036   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.149130   94369 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1211 23:35:05.149248   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.150025   94369 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1211 23:35:05.150025   94369 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1211 23:35:05.150904   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.151051   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.151075   94369 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1211 23:35:05.151092   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1211 23:35:05.151111   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.151467   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.151481   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.151843   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.152131   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.152312   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.152405   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.152692   94369 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1211 23:35:05.152777   94369 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1211 23:35:05.152791   94369 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1211 23:35:05.152834   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.154204   94369 out.go:177]   - Using image docker.io/busybox:stable
	I1211 23:35:05.154208   94369 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1211 23:35:05.154285   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1211 23:35:05.154307   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.154855   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.155709   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.155743   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.155871   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.156178   94369 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:35:05.156194   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1211 23:35:05.156212   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.156366   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.156380   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.156542   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.156747   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.157366   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.157393   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.157529   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.157702   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.157929   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.157937   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I1211 23:35:05.158351   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.158541   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.158796   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.159332   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.159358   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.159205   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.159374   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.159421   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.159589   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.159753   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.159903   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.160893   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37471
	I1211 23:35:05.161350   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:05.161842   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:05.161861   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:05.161863   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.162134   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.162304   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.162443   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.162462   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.162963   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.163120   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.163272   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.163408   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.163564   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:05.163749   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:05.164708   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.165378   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:05.165571   94369 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:35:05.165589   94369 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:35:05.165605   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.166808   94369 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1211 23:35:05.168289   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1211 23:35:05.168306   94369 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1211 23:35:05.168322   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:05.168431   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.168745   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.168773   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.168998   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.169220   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.169436   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.169675   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	W1211 23:35:05.170412   94369 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:51768->192.168.39.225:22: read: connection reset by peer
	I1211 23:35:05.170446   94369 retry.go:31] will retry after 211.284806ms: ssh: handshake failed: read tcp 192.168.39.1:51768->192.168.39.225:22: read: connection reset by peer
	I1211 23:35:05.171284   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.171780   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:05.171843   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:05.171968   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:05.172113   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:05.172226   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:05.172309   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:05.367782   94369 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:35:05.368078   94369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:35:05.415466   94369 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1211 23:35:05.415495   94369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1211 23:35:05.424780   94369 node_ready.go:35] waiting up to 6m0s for node "addons-021354" to be "Ready" ...
	I1211 23:35:05.429344   94369 node_ready.go:49] node "addons-021354" has status "Ready":"True"
	I1211 23:35:05.429386   94369 node_ready.go:38] duration metric: took 4.553158ms for node "addons-021354" to be "Ready" ...
	I1211 23:35:05.429396   94369 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1211 23:35:05.442063   94369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:05.480046   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:35:05.494524   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1211 23:35:05.524327   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1211 23:35:05.546019   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1211 23:35:05.546050   94369 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1211 23:35:05.547370   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1211 23:35:05.560783   94369 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1211 23:35:05.560811   94369 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1211 23:35:05.563771   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1211 23:35:05.563790   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1211 23:35:05.582267   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1211 23:35:05.596439   94369 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1211 23:35:05.596463   94369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1211 23:35:05.598688   94369 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:35:05.598705   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1211 23:35:05.605369   94369 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1211 23:35:05.605386   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1211 23:35:05.611794   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1211 23:35:05.630426   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1211 23:35:05.704133   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1211 23:35:05.704159   94369 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1211 23:35:05.725417   94369 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:35:05.725456   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1211 23:35:05.742894   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1211 23:35:05.742932   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1211 23:35:05.745675   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1211 23:35:05.747898   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:35:05.775618   94369 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1211 23:35:05.775649   94369 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1211 23:35:05.791719   94369 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1211 23:35:05.791750   94369 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1211 23:35:05.899857   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1211 23:35:05.899887   94369 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1211 23:35:05.923894   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1211 23:35:05.951683   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1211 23:35:05.951718   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1211 23:35:05.996895   94369 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:35:05.996931   94369 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1211 23:35:06.005814   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1211 23:35:06.005842   94369 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1211 23:35:06.090409   94369 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:35:06.090445   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1211 23:35:06.152310   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1211 23:35:06.152353   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1211 23:35:06.262086   94369 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:35:06.262141   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1211 23:35:06.327942   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1211 23:35:06.334648   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1211 23:35:06.503690   94369 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1211 23:35:06.503730   94369 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1211 23:35:06.568045   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:35:06.808414   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1211 23:35:06.808448   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1211 23:35:07.250265   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1211 23:35:07.250309   94369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1211 23:35:07.446300   94369 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.078178099s)
	I1211 23:35:07.446333   94369 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
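At this point the coredns ConfigMap's Corefile has been patched to add a hosts block that resolves host.minikube.internal to the host gateway (192.168.39.1 on this run) and to enable query logging. A quick way to confirm the injected record is to dump the Corefile (a minimal check sketch; the context name matches this run):

    kubectl --context addons-021354 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'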
	I1211 23:35:07.450296   94369 pod_ready.go:103] pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace has status "Ready":"False"
	I1211 23:35:07.731948   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1211 23:35:07.731972   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1211 23:35:07.998188   94369 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-021354" context rescaled to 1 replicas
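The rescale logged above trims CoreDNS from the two replicas created at bootstrap down to one. The equivalent manual operation (a sketch, not the internal call minikube makes) is:

    kubectl --context addons-021354 -n kube-system scale deployment coredns --replicas=1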
	I1211 23:35:08.018806   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1211 23:35:08.018832   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1211 23:35:08.280221   94369 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:35:08.280252   94369 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1211 23:35:08.641703   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1211 23:35:09.604176   94369 pod_ready.go:103] pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace has status "Ready":"False"
	I1211 23:35:09.977101   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.497016735s)
	I1211 23:35:09.977170   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:09.977184   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:09.977542   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:09.977566   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:09.977582   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:09.977595   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:09.977860   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:09.977878   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:09.977897   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:12.083532   94369 pod_ready.go:93] pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.083620   94369 pod_ready.go:82] duration metric: took 6.64147023s for pod "coredns-7c65d6cfc9-ctjgq" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.083645   94369 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zqjkl" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.127453   94369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1211 23:35:12.127497   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:12.130490   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:12.130885   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:12.130907   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:12.131143   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:12.131315   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:12.131512   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:12.131684   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:12.134407   94369 pod_ready.go:93] pod "coredns-7c65d6cfc9-zqjkl" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.134428   94369 pod_ready.go:82] duration metric: took 50.774481ms for pod "coredns-7c65d6cfc9-zqjkl" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.134442   94369 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.147432   94369 pod_ready.go:93] pod "etcd-addons-021354" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.147458   94369 pod_ready.go:82] duration metric: took 13.00608ms for pod "etcd-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.147472   94369 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.189516   94369 pod_ready.go:93] pod "kube-apiserver-addons-021354" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.189542   94369 pod_ready.go:82] duration metric: took 42.061425ms for pod "kube-apiserver-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.189556   94369 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.199843   94369 pod_ready.go:93] pod "kube-controller-manager-addons-021354" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.199868   94369 pod_ready.go:82] duration metric: took 10.301991ms for pod "kube-controller-manager-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.199881   94369 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nkpsm" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.347570   94369 pod_ready.go:93] pod "kube-proxy-nkpsm" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.347605   94369 pod_ready.go:82] duration metric: took 147.716679ms for pod "kube-proxy-nkpsm" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.347618   94369 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.563095   94369 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1211 23:35:12.629699   94369 addons.go:234] Setting addon gcp-auth=true in "addons-021354"
	I1211 23:35:12.629757   94369 host.go:66] Checking if "addons-021354" exists ...
	I1211 23:35:12.630063   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:12.630104   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:12.646065   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I1211 23:35:12.646526   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:12.647128   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:12.647158   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:12.647583   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:12.648291   94369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:35:12.648350   94369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:35:12.663711   94369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40925
	I1211 23:35:12.664159   94369 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:35:12.664708   94369 main.go:141] libmachine: Using API Version  1
	I1211 23:35:12.664740   94369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:35:12.665081   94369 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:35:12.665313   94369 main.go:141] libmachine: (addons-021354) Calling .GetState
	I1211 23:35:12.667081   94369 main.go:141] libmachine: (addons-021354) Calling .DriverName
	I1211 23:35:12.667314   94369 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1211 23:35:12.667346   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHHostname
	I1211 23:35:12.670656   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:12.671141   94369 main.go:141] libmachine: (addons-021354) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:1d:ff", ip: ""} in network mk-addons-021354: {Iface:virbr1 ExpiryTime:2024-12-12 00:34:33 +0000 UTC Type:0 Mac:52:54:00:f7:1d:ff Iaid: IPaddr:192.168.39.225 Prefix:24 Hostname:addons-021354 Clientid:01:52:54:00:f7:1d:ff}
	I1211 23:35:12.671175   94369 main.go:141] libmachine: (addons-021354) DBG | domain addons-021354 has defined IP address 192.168.39.225 and MAC address 52:54:00:f7:1d:ff in network mk-addons-021354
	I1211 23:35:12.671333   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHPort
	I1211 23:35:12.671541   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHKeyPath
	I1211 23:35:12.671742   94369 main.go:141] libmachine: (addons-021354) Calling .GetSSHUsername
	I1211 23:35:12.671904   94369 sshutil.go:53] new ssh client: &{IP:192.168.39.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/addons-021354/id_rsa Username:docker}
	I1211 23:35:12.752620   94369 pod_ready.go:93] pod "kube-scheduler-addons-021354" in "kube-system" namespace has status "Ready":"True"
	I1211 23:35:12.752649   94369 pod_ready.go:82] duration metric: took 405.02273ms for pod "kube-scheduler-addons-021354" in "kube-system" namespace to be "Ready" ...
	I1211 23:35:12.752660   94369 pod_ready.go:39] duration metric: took 7.32325392s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1211 23:35:12.752682   94369 api_server.go:52] waiting for apiserver process to appear ...
	I1211 23:35:12.752768   94369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
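The readiness loop above and the apiserver checks that follow can be reproduced by hand with stock tooling; a rough sketch of the equivalent checks (the selector and the /healthz probe mirror what the log shows, but this is not the code path minikube itself runs):

    # Wait for system-critical pods such as CoreDNS (one example selector from the label list above).
    kubectl --context addons-021354 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s
    # Confirm a kube-apiserver process is running on the node (the log does this over SSH).
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Probe apiserver health through the API instead of hitting https://<node-ip>:8443/healthz directly.
    kubectl --context addons-021354 get --raw /healthz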
	I1211 23:35:14.379115   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.884550202s)
	I1211 23:35:14.379196   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379211   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379212   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.854846519s)
	I1211 23:35:14.379262   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379286   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379311   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.831918968s)
	I1211 23:35:14.379343   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379353   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379416   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.797109646s)
	I1211 23:35:14.379450   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379457   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.749003138s)
	I1211 23:35:14.379464   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379474   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379482   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379425   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.767606537s)
	I1211 23:35:14.379515   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379524   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379558   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.379574   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.379582   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.379585   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.633887714s)
	I1211 23:35:14.379615   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.379616   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379635   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379648   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.631734559s)
	I1211 23:35:14.379670   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379678   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379692   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.379700   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.379589   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379723   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379726   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.379734   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.379741   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379711   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.379759   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379765   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379773   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.455848228s)
	I1211 23:35:14.379748   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379793   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379802   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379854   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.379883   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.379891   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.051919528s)
	I1211 23:35:14.379893   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.379904   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379906   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.379910   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.379913   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380008   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.045326593s)
	I1211 23:35:14.380034   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380042   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380169   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.81209319s)
	W1211 23:35:14.380197   94369 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1211 23:35:14.380219   94369 retry.go:31] will retry after 296.729862ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
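The failure above is an ordering race, not a broken manifest: the same apply both registers the snapshot.storage.k8s.io CRDs and creates a VolumeSnapshotClass that depends on them, and the API server rejects the custom resource because the CRDs are not yet established. Minikube's remedy, visible a few lines below, is simply to retry the whole apply (with --force) after roughly 300ms. A manual two-phase sketch that avoids the race entirely (file names are the addon manifests referenced in the log):

    # Phase 1: install only the CRDs and wait until the API server has established them.
    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    # Phase 2: the VolumeSnapshotClass and the controller RBAC/deployment can now be applied safely.
    kubectl apply -f csi-hostpath-snapshotclass.yaml -f rbac-volume-snapshot-controller.yaml -f volume-snapshot-controller-deployment.yaml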
	I1211 23:35:14.380272   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380281   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380289   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380296   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380351   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380371   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380377   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380384   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380390   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380426   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380443   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380449   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380456   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380462   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380499   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380517   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380523   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380530   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380536   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380575   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380590   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.380606   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380612   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380619   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.380625   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.380662   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.380670   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.380680   94369 addons.go:475] Verifying addon ingress=true in "addons-021354"
	I1211 23:35:14.381529   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.381569   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.381576   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.381823   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.381854   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.381861   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.382532   94369 out.go:177] * Verifying ingress addon...
	I1211 23:35:14.383570   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.383609   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.383910   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.383943   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.383950   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.385408   94369 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1211 23:35:14.385469   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.385504   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.385511   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.385520   94369 addons.go:475] Verifying addon registry=true in "addons-021354"
	I1211 23:35:14.385905   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.385943   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.385950   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386161   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.386193   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386199   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386238   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.386269   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386276   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386283   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.386290   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.386508   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386517   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386527   94369 addons.go:475] Verifying addon metrics-server=true in "addons-021354"
	I1211 23:35:14.386531   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.386567   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386575   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386877   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.386888   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.386897   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.386903   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.387150   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.387163   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.387266   94369 out.go:177] * Verifying registry addon...
	I1211 23:35:14.389216   94369 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-021354 service yakd-dashboard -n yakd-dashboard
	
	I1211 23:35:14.390012   94369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1211 23:35:14.436961   94369 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1211 23:35:14.436991   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:14.437304   94369 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1211 23:35:14.437320   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:14.470180   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.470209   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.470518   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.470539   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	W1211 23:35:14.470627   94369 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
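This warning comes from the storage-provisioner-rancher addon losing an optimistic-concurrency update while marking local-path as the default StorageClass (another controller modified the object between read and write). The marking step itself is the standard annotation flip; a sketch of the manual equivalent (not necessarily how the addon retries internally):

    kubectl --context addons-021354 patch storageclass local-path \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'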
	I1211 23:35:14.483911   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:14.483932   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:14.484267   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:14.484290   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:14.484319   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:14.677595   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1211 23:35:14.896699   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:14.896949   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:15.280786   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.639011481s)
	I1211 23:35:15.280839   94369 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.528045224s)
	I1211 23:35:15.280879   94369 api_server.go:72] duration metric: took 10.289939885s to wait for apiserver process to appear ...
	I1211 23:35:15.280891   94369 api_server.go:88] waiting for apiserver healthz status ...
	I1211 23:35:15.280899   94369 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.613563089s)
	I1211 23:35:15.280909   94369 api_server.go:253] Checking apiserver healthz at https://192.168.39.225:8443/healthz ...
	I1211 23:35:15.280840   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:15.281131   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:15.281413   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:15.281430   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:15.281441   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:15.281448   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:15.282613   94369 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1211 23:35:15.283181   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:15.283197   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:15.283212   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:15.283238   94369 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-021354"
	I1211 23:35:15.284745   94369 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1211 23:35:15.284771   94369 out.go:177] * Verifying csi-hostpath-driver addon...
	I1211 23:35:15.286013   94369 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1211 23:35:15.286039   94369 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1211 23:35:15.286727   94369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1211 23:35:15.325778   94369 api_server.go:279] https://192.168.39.225:8443/healthz returned 200:
	ok
	I1211 23:35:15.340309   94369 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1211 23:35:15.340341   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:15.354720   94369 api_server.go:141] control plane version: v1.31.2
	I1211 23:35:15.354761   94369 api_server.go:131] duration metric: took 73.863036ms to wait for apiserver health ...
	I1211 23:35:15.354774   94369 system_pods.go:43] waiting for kube-system pods to appear ...
	I1211 23:35:15.394871   94369 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1211 23:35:15.394898   94369 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1211 23:35:15.398092   94369 system_pods.go:59] 19 kube-system pods found
	I1211 23:35:15.398141   94369 system_pods.go:61] "amd-gpu-device-plugin-bh5l6" [dcd97a68-2e6d-4f42-8c52-855402d21e6c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:35:15.398158   94369 system_pods.go:61] "coredns-7c65d6cfc9-ctjgq" [28d6a423-c466-4a36-add7-9401b3318dad] Running
	I1211 23:35:15.398166   94369 system_pods.go:61] "coredns-7c65d6cfc9-zqjkl" [0dede579-c7ea-4553-b6b2-23f2a38c1cee] Running
	I1211 23:35:15.398172   94369 system_pods.go:61] "csi-hostpath-attacher-0" [c83f1e10-78d8-4652-9020-50342da3a576] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:35:15.398185   94369 system_pods.go:61] "csi-hostpath-resizer-0" [563bb0d7-c97d-410a-ac13-e968cbe6809f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:35:15.398195   94369 system_pods.go:61] "csi-hostpathplugin-bp9w7" [3b465037-83b0-4363-a2e2-16ebd3d3ac4f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:35:15.398203   94369 system_pods.go:61] "etcd-addons-021354" [23ea386f-3e06-41b9-b355-6feed882a434] Running
	I1211 23:35:15.398212   94369 system_pods.go:61] "kube-apiserver-addons-021354" [d0fd5365-ac43-4603-aee1-2ec157d58452] Running
	I1211 23:35:15.398218   94369 system_pods.go:61] "kube-controller-manager-addons-021354" [5c9f0c46-e7ee-490a-984d-fd2e80d8831b] Running
	I1211 23:35:15.398229   94369 system_pods.go:61] "kube-ingress-dns-minikube" [27c99b66-f43b-4ba8-b1e3-e20458576994] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:35:15.398240   94369 system_pods.go:61] "kube-proxy-nkpsm" [168a41ed-f854-4453-9157-1d3e444d4185] Running
	I1211 23:35:15.398246   94369 system_pods.go:61] "kube-scheduler-addons-021354" [b3b35e0d-4e6d-46b1-b771-d31c478524a7] Running
	I1211 23:35:15.398258   94369 system_pods.go:61] "metrics-server-84c5f94fbc-v42nk" [277fa5bf-2781-493c-86a5-d170dc8b9237] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:35:15.398272   94369 system_pods.go:61] "nvidia-device-plugin-daemonset-9qfkl" [fb3a5825-e9dc-42d8-ba09-f0d94c314d72] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:35:15.398284   94369 system_pods.go:61] "registry-5cc95cd69-9rj9b" [0eebcfc6-7414-4613-bf0e-42a424a43722] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:35:15.398296   94369 system_pods.go:61] "registry-proxy-x2lv7" [8128c544-09f7-4769-85c1-30a0a916ca57] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:35:15.398307   94369 system_pods.go:61] "snapshot-controller-56fcc65765-gfjfb" [c3966cdf-e310-4ffa-9d98-70eccaabb23b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:35:15.398391   94369 system_pods.go:61] "snapshot-controller-56fcc65765-w2qfk" [9a5f87de-b239-4076-baa2-e6e98f3e018b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:35:15.398415   94369 system_pods.go:61] "storage-provisioner" [86997c22-05b1-4987-b8ee-d1d7a36a0ddf] Running
	I1211 23:35:15.398427   94369 system_pods.go:74] duration metric: took 43.641817ms to wait for pod list to return data ...
	I1211 23:35:15.398436   94369 default_sa.go:34] waiting for default service account to be created ...
	I1211 23:35:15.400400   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:15.414626   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:15.423705   94369 default_sa.go:45] found service account: "default"
	I1211 23:35:15.423733   94369 default_sa.go:55] duration metric: took 25.286742ms for default service account to be created ...
	I1211 23:35:15.423745   94369 system_pods.go:116] waiting for k8s-apps to be running ...
	I1211 23:35:15.436831   94369 system_pods.go:86] 19 kube-system pods found
	I1211 23:35:15.436862   94369 system_pods.go:89] "amd-gpu-device-plugin-bh5l6" [dcd97a68-2e6d-4f42-8c52-855402d21e6c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1211 23:35:15.436868   94369 system_pods.go:89] "coredns-7c65d6cfc9-ctjgq" [28d6a423-c466-4a36-add7-9401b3318dad] Running
	I1211 23:35:15.436876   94369 system_pods.go:89] "coredns-7c65d6cfc9-zqjkl" [0dede579-c7ea-4553-b6b2-23f2a38c1cee] Running
	I1211 23:35:15.436882   94369 system_pods.go:89] "csi-hostpath-attacher-0" [c83f1e10-78d8-4652-9020-50342da3a576] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1211 23:35:15.436887   94369 system_pods.go:89] "csi-hostpath-resizer-0" [563bb0d7-c97d-410a-ac13-e968cbe6809f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1211 23:35:15.436895   94369 system_pods.go:89] "csi-hostpathplugin-bp9w7" [3b465037-83b0-4363-a2e2-16ebd3d3ac4f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1211 23:35:15.436903   94369 system_pods.go:89] "etcd-addons-021354" [23ea386f-3e06-41b9-b355-6feed882a434] Running
	I1211 23:35:15.436908   94369 system_pods.go:89] "kube-apiserver-addons-021354" [d0fd5365-ac43-4603-aee1-2ec157d58452] Running
	I1211 23:35:15.436911   94369 system_pods.go:89] "kube-controller-manager-addons-021354" [5c9f0c46-e7ee-490a-984d-fd2e80d8831b] Running
	I1211 23:35:15.436922   94369 system_pods.go:89] "kube-ingress-dns-minikube" [27c99b66-f43b-4ba8-b1e3-e20458576994] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1211 23:35:15.436928   94369 system_pods.go:89] "kube-proxy-nkpsm" [168a41ed-f854-4453-9157-1d3e444d4185] Running
	I1211 23:35:15.436933   94369 system_pods.go:89] "kube-scheduler-addons-021354" [b3b35e0d-4e6d-46b1-b771-d31c478524a7] Running
	I1211 23:35:15.436940   94369 system_pods.go:89] "metrics-server-84c5f94fbc-v42nk" [277fa5bf-2781-493c-86a5-d170dc8b9237] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1211 23:35:15.436946   94369 system_pods.go:89] "nvidia-device-plugin-daemonset-9qfkl" [fb3a5825-e9dc-42d8-ba09-f0d94c314d72] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1211 23:35:15.436955   94369 system_pods.go:89] "registry-5cc95cd69-9rj9b" [0eebcfc6-7414-4613-bf0e-42a424a43722] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1211 23:35:15.436963   94369 system_pods.go:89] "registry-proxy-x2lv7" [8128c544-09f7-4769-85c1-30a0a916ca57] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1211 23:35:15.436971   94369 system_pods.go:89] "snapshot-controller-56fcc65765-gfjfb" [c3966cdf-e310-4ffa-9d98-70eccaabb23b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:35:15.436979   94369 system_pods.go:89] "snapshot-controller-56fcc65765-w2qfk" [9a5f87de-b239-4076-baa2-e6e98f3e018b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1211 23:35:15.436983   94369 system_pods.go:89] "storage-provisioner" [86997c22-05b1-4987-b8ee-d1d7a36a0ddf] Running
	I1211 23:35:15.436994   94369 system_pods.go:126] duration metric: took 13.242421ms to wait for k8s-apps to be running ...
	I1211 23:35:15.437004   94369 system_svc.go:44] waiting for kubelet service to be running ....
	I1211 23:35:15.437051   94369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1211 23:35:15.465899   94369 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:35:15.465919   94369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1211 23:35:15.540196   94369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1211 23:35:15.797763   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:15.892206   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:15.898772   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:16.291239   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:16.390224   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:16.393102   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:16.542040   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.864389048s)
	I1211 23:35:16.542112   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:16.542115   94369 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.105036482s)
	I1211 23:35:16.542150   94369 system_svc.go:56] duration metric: took 1.105140012s WaitForService to wait for kubelet
	I1211 23:35:16.542130   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:16.542168   94369 kubeadm.go:582] duration metric: took 11.551227162s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:35:16.542198   94369 node_conditions.go:102] verifying NodePressure condition ...
	I1211 23:35:16.542552   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:16.542618   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:16.542637   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:16.542653   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:16.542666   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:16.542974   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:16.543002   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:16.543016   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:16.546203   94369 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1211 23:35:16.546224   94369 node_conditions.go:123] node cpu capacity is 2
	I1211 23:35:16.546253   94369 node_conditions.go:105] duration metric: took 4.046611ms to run NodePressure ...
	I1211 23:35:16.546265   94369 start.go:241] waiting for startup goroutines ...
	I1211 23:35:16.791426   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:16.897876   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:16.898463   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:17.130012   94369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.589766234s)
	I1211 23:35:17.130077   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:17.130094   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:17.130433   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:17.130461   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:17.130472   94369 main.go:141] libmachine: Making call to close driver server
	I1211 23:35:17.130481   94369 main.go:141] libmachine: (addons-021354) Calling .Close
	I1211 23:35:17.130479   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:17.130809   94369 main.go:141] libmachine: (addons-021354) DBG | Closing plugin on server side
	I1211 23:35:17.130885   94369 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:35:17.130900   94369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:35:17.131955   94369 addons.go:475] Verifying addon gcp-auth=true in "addons-021354"
	I1211 23:35:17.134288   94369 out.go:177] * Verifying gcp-auth addon...
	I1211 23:35:17.136645   94369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1211 23:35:17.148167   94369 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1211 23:35:17.148195   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:17.296717   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:17.389771   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:17.393486   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:17.641057   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:17.799473   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:17.890621   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:17.894938   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:18.140149   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:18.291774   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:18.402707   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:18.406829   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:18.641332   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:18.791851   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:18.889792   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:18.892658   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:19.140502   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:19.291423   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:19.390641   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:19.395503   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:19.646217   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:19.790905   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:19.889910   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:19.893062   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:20.281966   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:20.292115   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:20.390165   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:20.393213   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:20.640626   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:20.792140   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:20.890031   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:20.893017   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:21.141337   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:21.291902   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:21.389888   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:21.393033   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:21.640753   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:21.792280   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:21.890611   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:21.893498   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:22.140680   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:22.291960   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:22.389934   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:22.393077   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:22.641259   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:22.792239   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:22.890165   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:22.893313   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:23.140301   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:23.292138   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:23.389645   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:23.393944   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:23.640139   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:23.792102   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:23.891360   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:23.893272   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:24.143108   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:24.291362   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:24.389619   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:24.393994   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:24.641509   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:24.792669   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:24.890110   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:24.892708   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:25.140984   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:25.292249   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:25.391372   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:25.392961   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:25.640410   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:25.791839   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:25.890522   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:25.893617   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:26.140986   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:26.291584   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:26.389889   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:26.393406   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:26.640892   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:26.792459   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:26.889713   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:26.894043   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:27.140183   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:27.291734   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:27.390017   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:27.393909   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:27.640660   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:27.792353   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:27.890220   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:27.892974   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:28.140655   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:28.292414   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:28.390486   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:28.393508   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:28.641272   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:28.792082   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:28.890496   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:28.893257   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:29.140389   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:29.290876   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:29.390266   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:29.393367   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:29.640680   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:29.792250   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:29.890746   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:29.893991   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:30.141845   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:30.292648   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:30.389885   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:30.393465   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:30.645889   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:30.794674   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:30.889690   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:30.894049   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:31.141562   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:31.292575   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:31.390054   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:31.393268   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:31.640287   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:31.791001   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:31.890239   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:31.892814   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:32.140546   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:32.291940   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:32.389582   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:32.394002   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:32.644728   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:32.791649   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:32.889764   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:32.893766   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:33.140158   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:33.291283   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:33.389782   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:33.393533   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:33.640691   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:33.792178   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:33.890169   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:33.892984   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:34.140333   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:34.292959   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:34.389974   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:34.392883   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:34.639902   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:34.980731   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:34.981504   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:34.981775   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:35.140332   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:35.291378   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:35.389124   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:35.393141   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:35.640637   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:35.791997   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:35.890043   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:35.892981   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:36.139812   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:36.293332   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:36.390374   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:36.393334   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:36.640542   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:36.792446   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:36.890206   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:36.892768   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:37.141484   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:37.291502   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:37.389634   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:37.394212   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:37.640132   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:37.791652   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:37.890223   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:37.892971   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:38.140671   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:38.292406   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:38.391019   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:38.392893   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:38.641253   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:38.824725   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:39.144257   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:39.147509   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:39.147563   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:39.291979   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:39.393336   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:39.394049   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:39.639830   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:39.792106   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:39.890010   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:39.893040   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:40.139797   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:40.292281   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:40.389329   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:40.393669   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:40.640812   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:40.792017   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:40.890369   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:40.893008   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:41.139823   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:41.293979   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:41.390046   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:41.392885   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:41.640274   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:41.791269   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:41.892599   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:41.894014   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:42.140620   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:42.292401   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:42.389246   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:42.393889   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:42.640015   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:42.791153   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:42.891219   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:42.894388   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:43.139957   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:43.291682   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:43.391024   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:43.491538   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:43.640783   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:43.792280   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:43.890416   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:43.893203   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:44.140786   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:44.293014   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:44.390444   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:44.393079   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:44.640616   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:44.792229   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:44.891510   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:44.892929   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:45.140829   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:45.294311   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:45.391389   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:45.394004   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:45.640691   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:45.792680   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:45.890007   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:45.893307   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:46.140882   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:46.292545   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:46.389749   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:46.393548   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:46.641214   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:46.791906   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:46.890366   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:46.893300   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:47.140895   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:47.293414   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:47.389415   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:47.393463   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:47.640550   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:47.791502   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:47.889795   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:47.892858   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:48.140837   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:48.292371   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:48.390389   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:48.393147   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:48.640680   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:48.793014   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:48.890982   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:48.893879   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:49.140164   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:49.292071   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:49.390606   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:49.394106   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:49.640598   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:49.792352   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:49.890996   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:49.893723   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:50.141566   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:50.292331   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:50.390337   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:50.393558   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:50.641275   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:50.791119   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:50.891423   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:50.895690   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:51.141495   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:51.291656   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:51.389803   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:51.393066   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:51.640865   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:51.791686   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:51.889695   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:51.893839   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:52.142023   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:52.291873   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:52.389890   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:52.393978   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:52.640952   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:52.793098   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:52.896085   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:52.897096   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:53.141117   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:53.291506   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:53.389879   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:53.392871   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:53.640254   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:53.791240   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:53.891453   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:53.893410   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:54.140189   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:54.293468   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:54.389575   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:54.393676   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:54.641098   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:54.792384   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:54.889483   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:54.894615   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:55.140700   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:55.291677   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:55.389695   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:55.394091   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:55.640249   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:55.791268   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:55.890019   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:55.893993   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:56.140143   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:56.291899   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:56.389372   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:56.393487   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:56.641066   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:56.791906   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:56.891472   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:56.893111   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:57.140454   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:57.292200   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:57.390297   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:57.393198   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:57.639925   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:57.791696   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:57.889786   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:57.892874   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:58.140937   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:58.311619   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:58.389776   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:58.394079   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:58.640349   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:58.791822   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:58.890434   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:58.893766   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:59.140688   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:59.292520   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:59.389285   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:59.393354   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:35:59.640560   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:35:59.792042   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:35:59.890630   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:35:59.893683   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:00.141004   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:00.291829   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:00.390030   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:00.392997   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:00.640974   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:00.791317   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:00.890096   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:00.893456   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:01.141025   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:01.291167   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:01.390591   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:01.393286   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:01.640304   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:01.791994   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:01.890526   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:01.893252   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:02.141456   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:02.292100   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:02.390578   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:02.395016   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:02.639965   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:02.791051   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:02.889897   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:02.893689   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:03.140362   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:03.291141   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:03.390576   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:03.393256   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:03.640236   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:03.791889   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:03.891690   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:03.893328   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1211 23:36:04.140326   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:04.291543   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:04.389938   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:04.393099   94369 kapi.go:107] duration metric: took 50.00308353s to wait for kubernetes.io/minikube-addons=registry ...
	I1211 23:36:04.640167   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:04.791893   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:04.890555   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:05.141320   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:05.293297   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:05.390771   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:05.640877   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:05.791933   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:05.890151   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:06.140707   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:06.292454   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:06.389656   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:06.640800   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:06.793052   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:06.889489   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:07.140567   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:07.291458   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:07.390893   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:07.640889   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:07.792286   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:07.889898   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:08.140859   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:08.292165   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:08.390078   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:08.639874   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:08.792813   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:08.889834   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:09.141054   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:09.291158   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:09.391512   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:09.640319   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:09.791795   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:09.889988   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:10.141518   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:10.291973   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:10.390321   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:10.640315   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:10.791292   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:10.889099   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:11.141039   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:11.293720   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:11.392592   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:11.641377   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:11.791548   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:11.890652   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:12.140554   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:12.292032   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:12.390670   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:12.641085   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:12.792129   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:12.891790   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:13.140278   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:13.291685   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:13.392295   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:13.640504   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:13.791440   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:13.890473   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:14.140348   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:14.291183   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:14.390245   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:14.640205   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:14.791452   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:14.890360   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:15.141910   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:15.293573   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:15.392504   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:15.641811   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:15.792427   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:15.891074   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:16.140688   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:16.291848   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:16.390284   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:16.640713   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:16.797129   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:16.891607   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:17.140402   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:17.291501   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:17.399328   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:17.641574   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:17.792052   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:17.889600   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:18.140136   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:18.292116   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:18.390082   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:18.641473   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:18.791963   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:18.890214   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:19.141001   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:19.291247   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:19.389980   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:19.641695   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:19.792254   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:19.891513   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:20.141069   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:20.291857   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:20.390239   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:20.640543   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:20.792762   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:20.889633   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:21.140084   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:21.291660   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:21.389864   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:21.641426   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:21.792077   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:22.059580   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:22.256481   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:22.291860   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:22.389722   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:22.640176   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:22.791514   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:22.892757   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:23.140443   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:23.291863   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:23.391239   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:23.640072   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:23.795723   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:23.889084   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:24.140793   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:24.293856   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:24.395752   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:24.640726   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:24.792721   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:24.890590   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:25.140765   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:25.292263   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:25.389459   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:25.640492   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:25.791857   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:25.891095   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:26.140920   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:26.292387   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:26.389681   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:26.640509   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:26.792039   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:26.891910   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:27.140967   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:27.292600   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:27.390244   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:27.642276   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:27.792424   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:27.889292   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:28.140547   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:28.291406   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:28.389850   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:28.640364   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:28.830381   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:28.890616   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:29.140646   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:29.291617   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:29.393924   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:29.647979   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:29.792059   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:29.893795   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:30.140872   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:30.291904   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:30.390932   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:30.640415   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:30.794354   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:30.890061   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:31.140631   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:31.292125   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:31.391893   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:31.640735   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:31.791658   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:31.895704   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:32.146008   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:32.299451   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:32.392214   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:32.640856   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:32.792492   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:32.891324   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:33.140646   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:33.292170   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:33.390714   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:33.643873   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:33.792106   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:33.890534   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:34.140708   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:34.292090   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:34.390292   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:34.886360   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:34.887041   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:34.894285   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:35.140541   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:35.292208   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:35.391113   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:35.643970   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:35.791555   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:35.897238   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:36.141594   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:36.292467   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:36.392596   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:36.640789   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:36.792230   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:36.889546   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:37.141081   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:37.291200   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:37.393733   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:37.640413   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:37.791802   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:37.890379   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:38.140928   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:38.291827   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:38.391197   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:38.640017   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:38.791519   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:38.889959   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:39.140962   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:39.290890   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:39.390236   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:39.641894   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:39.792451   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1211 23:36:39.889931   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:40.141242   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:40.291804   94369 kapi.go:107] duration metric: took 1m25.005072108s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1211 23:36:40.390182   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:40.641540   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:40.890750   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:41.140735   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:41.390060   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:41.640682   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:41.891713   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:42.140064   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:42.390520   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:42.640356   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:42.890731   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:43.140509   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:43.391018   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:43.641346   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:43.890889   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:44.141020   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:44.390652   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:44.640498   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:44.890882   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:45.143241   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:45.391186   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:45.641859   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:45.890204   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:46.139961   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:46.390625   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:46.640285   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:46.892020   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:47.141145   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:47.390829   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:47.641204   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:47.890875   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:48.140306   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:48.390059   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:48.641858   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:48.890289   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:49.140037   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:49.390527   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:49.641264   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:49.890725   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:50.140328   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:50.391426   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:50.640267   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:50.890334   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:51.139941   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:51.390939   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:51.640798   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:51.889872   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:52.140765   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:52.390817   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:52.640673   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:52.889837   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:53.141153   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:53.390687   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:53.641011   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:53.893854   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:54.141148   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:54.390679   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:54.640544   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:54.890966   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:55.140719   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:55.390276   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:55.639742   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:55.889873   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:56.141924   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:56.392036   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:56.640734   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:56.889752   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:57.140546   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:57.391136   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:57.640663   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:57.889930   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:58.140630   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:58.389747   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:58.640560   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:58.892325   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:59.141089   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:59.390842   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:36:59.642171   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:36:59.890843   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:00.140695   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:00.390037   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:00.641627   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:00.889509   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:01.140843   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:01.390298   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:01.640469   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:01.891017   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:02.140494   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:02.389730   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:02.640706   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:02.890645   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:03.141509   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:03.390746   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:03.641122   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:03.890652   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:04.141131   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:04.390131   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:04.640366   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:04.890145   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:05.140570   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:05.390767   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:05.641066   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:05.890458   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:06.140013   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:06.392343   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:06.641382   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:06.890566   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:07.140348   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:07.390711   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:07.640557   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:07.890898   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:08.141236   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:08.390578   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:08.640377   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:08.890844   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:09.140851   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:09.389789   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:09.641553   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:09.889600   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:10.140529   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:10.390668   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:10.640449   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:11.202316   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:11.202874   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:11.390707   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:11.641387   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:11.891574   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:12.141446   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:12.390990   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:12.640012   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:12.890133   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:13.141101   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:13.390323   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:13.640102   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:13.890719   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:14.140572   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:14.391117   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:14.640622   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:14.889956   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:15.140966   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:15.390477   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:15.640618   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:15.890888   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:16.140552   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:16.390485   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:16.640117   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:16.890540   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:17.140560   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:17.390736   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:17.640287   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:17.891143   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:18.140392   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:18.390499   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:18.641268   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:18.891086   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:19.141144   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:19.390376   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:19.640430   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:19.890926   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:20.141104   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:20.390280   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:20.640037   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:20.890396   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:21.140520   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:21.391058   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:21.641678   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:21.891475   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:22.140656   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:22.393665   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:22.640687   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:22.889710   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:23.139720   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:23.389730   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:23.640385   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:23.890890   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:24.140717   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:24.389880   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:24.641213   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:24.891481   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:25.140464   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:25.391527   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:25.641375   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:25.890954   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:26.140777   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:26.390656   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:26.640860   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:26.890447   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:27.140492   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:27.390573   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:27.640652   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:27.890005   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:28.141630   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:28.389859   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:28.641472   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:28.891387   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:29.140306   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:29.390860   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:29.640897   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:29.891968   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:30.141790   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:30.390547   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:30.640453   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:30.890831   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:31.140324   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:31.390591   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:31.641264   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:31.893895   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:32.141900   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:32.389830   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:32.640020   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:32.891240   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:33.140666   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:33.389775   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:33.640229   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:34.202289   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:34.202458   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:34.391027   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:34.640298   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:34.890655   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:35.140584   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:35.391795   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:35.642295   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:35.891064   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:36.140526   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:36.794507   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:36.794932   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:36.892633   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:37.141420   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:37.391327   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:37.640396   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:37.890912   94369 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1211 23:37:38.141199   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:38.390699   94369 kapi.go:107] duration metric: took 2m24.0052881s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1211 23:37:38.640703   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:39.140754   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:39.640012   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:40.140315   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:40.642150   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:41.142085   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:41.640257   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:42.141418   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:42.642242   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:43.140821   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:43.640059   94369 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1211 23:37:44.140524   94369 kapi.go:107] duration metric: took 2m27.003897373s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1211 23:37:44.142133   94369 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-021354 cluster.
	I1211 23:37:44.143501   94369 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1211 23:37:44.144831   94369 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1211 23:37:44.146271   94369 out.go:177] * Enabled addons: storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, inspektor-gadget, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1211 23:37:44.147493   94369 addons.go:510] duration metric: took 2m39.156498942s for enable addons: enabled=[storage-provisioner amd-gpu-device-plugin nvidia-device-plugin inspektor-gadget cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1211 23:37:44.147533   94369 start.go:246] waiting for cluster config update ...
	I1211 23:37:44.147556   94369 start.go:255] writing updated cluster config ...
	I1211 23:37:44.147878   94369 ssh_runner.go:195] Run: rm -f paused
	I1211 23:37:44.205326   94369 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1211 23:37:44.207060   94369 out.go:177] * Done! kubectl is now configured to use "addons-021354" cluster and "default" namespace by default
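	The kapi.go:96 lines above show the addon manager polling the cluster by label selector (app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=gcp-auth) roughly every 250 ms until the matching pods leave Pending, and the out.go:177 lines describe how to opt pods out of the gcp-auth credential mount. Below is a minimal sketch of the equivalent manual steps; the addon namespaces ingress-nginx and gcp-auth, the deployment name placeholder, and the label value "true" are assumptions, while the label key gcp-auth-skip-secret, the --refresh flag, and the profile/context name addons-021354 are taken from the log itself.

	    # Inspect the same pods the addon manager was polling (namespaces assumed).
	    kubectl --context addons-021354 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	    kubectl --context addons-021354 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth

	    # Opt future pods out of the credential mount by adding the label to a workload's
	    # pod template (deployment name and label value are placeholders; the key comes from the log).
	    kubectl --context addons-021354 patch deployment <your-deployment> \
	      -p '{"spec":{"template":{"metadata":{"labels":{"gcp-auth-skip-secret":"true"}}}}}'

	    # Re-mount credentials into pods created before the addon finished, as the log suggests.
	    minikube -p addons-021354 addons enable gcp-auth --refresh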
	
	
	==> CRI-O <==
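	The entries below are CRI-O debug logs for the CRI RPCs (Version, ImageFsInfo, ListContainers) issued while this report was collected. A hedged sketch of running the same queries by hand from the node, assuming the default CRI-O socket path; the profile name addons-021354 comes from the log above, and crictl is used here as a stand-in for the raw RPC calls:

	    # Open a shell on the node (profile name taken from the log above).
	    minikube -p addons-021354 ssh

	    # Same queries the interceptors logged, issued via crictl (socket path assumed).
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a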
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.747042181Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960646747018242,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bf09ad5-1cde-4e03-b858-c387a582ce71 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.748852590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe65b766-0f86-4639-97ac-83671f645987 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.748910795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe65b766-0f86-4639-97ac-83671f645987 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.749278680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:005ac7754a4f85035dcd2682227c43c4bcaaf82cd985c26c59ffcb95af37b1a6,PodSandboxId:90544d25dc8cc8ba8818ff9b88681cd0bc24cd3e3fdafbe6009e3856c1cb304a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733960468838977798,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8b2cl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6f30442-8f8d-47bb-83a7-e35c5f879569,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8a7e2acd7265aa37a7a716bdb48d59846f009fec18d18f63460d6a412ed6b9,PodSandboxId:f44fc51622f81050ae72c9b3ff1845ce5923cb2dd9cc1d4104b8c206bb117770,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733960325434033919,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264cded5-669e-4c91-a0aa-800234ac799a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d920673a1e830a704fcba21d58777c6eefac966c461616530df373e5177fe8b2,PodSandboxId:fc0769151650597b46fdcb3ac5d0efb89c9783b172906e5384e03e4886c9338a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733960272596070543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f37f102-2cd3-45d7-a
36e-58954eec3bcb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e8144c96f2b5267068573bf9ce48f07753d2c8abb7fdd4f929c88edcb85f85,PodSandboxId:6541effb8bb1c1dfeb04a4a4aad1c896e2c212dffa42d731a7c1aaed9d8b32da,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733960176793312543,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v42nk,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 277fa5bf-2781-493c-86a5-d170dc8b9237,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f42e28b71fa5a39e90872a83d1bb9f5d045a3bdc10105b267eb50a636e85e,PodSandboxId:11067e9e56f14fe37fe06b06749aefa25ea33edc1462bae1bb5fc3270ae64a36,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733960167455984785,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4rzfr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 19598ba3-56e0-4552-a658-084d184b5ed0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6860aab3fff86e85faeaed6ece3581b0e019402cdb41b3fb2ee44515455ee163,PodSandboxId:88896cc53547e1c6ed0a43c4a303e2e4617b9ea475b43abbd4e2fc81a923cf98,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733960143281494962,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bh5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd97a68-2e6d-4f42-8c52-855402d21e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b1897a55f24fb82118a636c26748e4b51ea902683b0e9fe5289033361bf6e1,PodSandboxId:691522df16e0207931430bb26401590c8d0e8b8b654433c3a50cf0393396ef42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733960111332560576,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86997c22-05b1-4987-b8ee-d1d7a36a0ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7924fdda27f8c583d2adee7f082d8eb20ec95a14a88aa651b8ff3bf14a270bd3,PodSandboxId:636e68fcc4a0ede70ee6f96d82772b0e6d9b13771f907901f9f42d8491db67fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173396011
0191072222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ctjgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6a423-c466-4a36-add7-9401b3318dad,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1,PodSandboxId:2ac84efc2c5fcebccb17d5870528646779d8fafa88eeb44e1373595220f911ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733960107018500473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkpsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168a41ed-f854-4453-9157-1d3e444d4185,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde,PodSandboxId:24140bc84f0a5298706a7a93ef9a7d060dee59b542d4292d32320f6e899448d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733960094317872695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff225982bdaab034ea125e47b66b68c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca,PodSandboxId:e876b66ef7de37c4031f14018d94ff2b47018d746a76d9fa99037bbff56e9c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733960094364732659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5e6213caadcf4e71b2874b2c8f3150,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672,PodSandboxId:be44b1e012d09dba68dc335ed9c3ae7445ee0f21ee2889f04b1106e0a6d3199b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733960094287607930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf626506ebe98d943792651346e8c82e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a,PodSandboxId:b14a081a3d9d875fe069c4e71f326a35a4fbbf4a454441bb859df6997efca650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1
b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733960094225589645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a510befc909f02ab6a66cc801a1e10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe65b766-0f86-4639-97ac-83671f645987 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.787096740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd8238a5-530a-4eb4-b1fa-92387ac8788d name=/runtime.v1.RuntimeService/Version
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.787282320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd8238a5-530a-4eb4-b1fa-92387ac8788d name=/runtime.v1.RuntimeService/Version
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.788171928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9606c45d-3c2d-414f-b4b4-8b9b7cedfa5f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.789471633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960646789447524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9606c45d-3c2d-414f-b4b4-8b9b7cedfa5f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.789959236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03fe042b-1ee4-47c7-83c4-e55a185e2f55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.790038632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03fe042b-1ee4-47c7-83c4-e55a185e2f55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.790433522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:005ac7754a4f85035dcd2682227c43c4bcaaf82cd985c26c59ffcb95af37b1a6,PodSandboxId:90544d25dc8cc8ba8818ff9b88681cd0bc24cd3e3fdafbe6009e3856c1cb304a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733960468838977798,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8b2cl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6f30442-8f8d-47bb-83a7-e35c5f879569,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8a7e2acd7265aa37a7a716bdb48d59846f009fec18d18f63460d6a412ed6b9,PodSandboxId:f44fc51622f81050ae72c9b3ff1845ce5923cb2dd9cc1d4104b8c206bb117770,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733960325434033919,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264cded5-669e-4c91-a0aa-800234ac799a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d920673a1e830a704fcba21d58777c6eefac966c461616530df373e5177fe8b2,PodSandboxId:fc0769151650597b46fdcb3ac5d0efb89c9783b172906e5384e03e4886c9338a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733960272596070543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f37f102-2cd3-45d7-a
36e-58954eec3bcb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e8144c96f2b5267068573bf9ce48f07753d2c8abb7fdd4f929c88edcb85f85,PodSandboxId:6541effb8bb1c1dfeb04a4a4aad1c896e2c212dffa42d731a7c1aaed9d8b32da,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733960176793312543,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v42nk,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 277fa5bf-2781-493c-86a5-d170dc8b9237,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f42e28b71fa5a39e90872a83d1bb9f5d045a3bdc10105b267eb50a636e85e,PodSandboxId:11067e9e56f14fe37fe06b06749aefa25ea33edc1462bae1bb5fc3270ae64a36,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733960167455984785,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4rzfr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 19598ba3-56e0-4552-a658-084d184b5ed0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6860aab3fff86e85faeaed6ece3581b0e019402cdb41b3fb2ee44515455ee163,PodSandboxId:88896cc53547e1c6ed0a43c4a303e2e4617b9ea475b43abbd4e2fc81a923cf98,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733960143281494962,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bh5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd97a68-2e6d-4f42-8c52-855402d21e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b1897a55f24fb82118a636c26748e4b51ea902683b0e9fe5289033361bf6e1,PodSandboxId:691522df16e0207931430bb26401590c8d0e8b8b654433c3a50cf0393396ef42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733960111332560576,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86997c22-05b1-4987-b8ee-d1d7a36a0ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7924fdda27f8c583d2adee7f082d8eb20ec95a14a88aa651b8ff3bf14a270bd3,PodSandboxId:636e68fcc4a0ede70ee6f96d82772b0e6d9b13771f907901f9f42d8491db67fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173396011
0191072222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ctjgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6a423-c466-4a36-add7-9401b3318dad,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1,PodSandboxId:2ac84efc2c5fcebccb17d5870528646779d8fafa88eeb44e1373595220f911ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733960107018500473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkpsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168a41ed-f854-4453-9157-1d3e444d4185,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde,PodSandboxId:24140bc84f0a5298706a7a93ef9a7d060dee59b542d4292d32320f6e899448d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733960094317872695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff225982bdaab034ea125e47b66b68c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca,PodSandboxId:e876b66ef7de37c4031f14018d94ff2b47018d746a76d9fa99037bbff56e9c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733960094364732659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5e6213caadcf4e71b2874b2c8f3150,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672,PodSandboxId:be44b1e012d09dba68dc335ed9c3ae7445ee0f21ee2889f04b1106e0a6d3199b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733960094287607930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf626506ebe98d943792651346e8c82e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a,PodSandboxId:b14a081a3d9d875fe069c4e71f326a35a4fbbf4a454441bb859df6997efca650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1
b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733960094225589645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a510befc909f02ab6a66cc801a1e10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03fe042b-1ee4-47c7-83c4-e55a185e2f55 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.826887633Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e549e7b5-9455-4a4a-871e-bbe217ca5564 name=/runtime.v1.RuntimeService/Version
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.826981559Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e549e7b5-9455-4a4a-871e-bbe217ca5564 name=/runtime.v1.RuntimeService/Version
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.828468899Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81dd8490-b04d-44a1-a61c-47c755459e4c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.829801317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960646829776325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81dd8490-b04d-44a1-a61c-47c755459e4c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.830400131Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f865a5f-913a-4901-940d-b47558641ba7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.830473291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f865a5f-913a-4901-940d-b47558641ba7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.830796616Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:005ac7754a4f85035dcd2682227c43c4bcaaf82cd985c26c59ffcb95af37b1a6,PodSandboxId:90544d25dc8cc8ba8818ff9b88681cd0bc24cd3e3fdafbe6009e3856c1cb304a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733960468838977798,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8b2cl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6f30442-8f8d-47bb-83a7-e35c5f879569,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8a7e2acd7265aa37a7a716bdb48d59846f009fec18d18f63460d6a412ed6b9,PodSandboxId:f44fc51622f81050ae72c9b3ff1845ce5923cb2dd9cc1d4104b8c206bb117770,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733960325434033919,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264cded5-669e-4c91-a0aa-800234ac799a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d920673a1e830a704fcba21d58777c6eefac966c461616530df373e5177fe8b2,PodSandboxId:fc0769151650597b46fdcb3ac5d0efb89c9783b172906e5384e03e4886c9338a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733960272596070543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f37f102-2cd3-45d7-a
36e-58954eec3bcb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e8144c96f2b5267068573bf9ce48f07753d2c8abb7fdd4f929c88edcb85f85,PodSandboxId:6541effb8bb1c1dfeb04a4a4aad1c896e2c212dffa42d731a7c1aaed9d8b32da,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733960176793312543,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v42nk,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 277fa5bf-2781-493c-86a5-d170dc8b9237,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f42e28b71fa5a39e90872a83d1bb9f5d045a3bdc10105b267eb50a636e85e,PodSandboxId:11067e9e56f14fe37fe06b06749aefa25ea33edc1462bae1bb5fc3270ae64a36,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733960167455984785,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4rzfr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 19598ba3-56e0-4552-a658-084d184b5ed0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6860aab3fff86e85faeaed6ece3581b0e019402cdb41b3fb2ee44515455ee163,PodSandboxId:88896cc53547e1c6ed0a43c4a303e2e4617b9ea475b43abbd4e2fc81a923cf98,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733960143281494962,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bh5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd97a68-2e6d-4f42-8c52-855402d21e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b1897a55f24fb82118a636c26748e4b51ea902683b0e9fe5289033361bf6e1,PodSandboxId:691522df16e0207931430bb26401590c8d0e8b8b654433c3a50cf0393396ef42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733960111332560576,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86997c22-05b1-4987-b8ee-d1d7a36a0ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7924fdda27f8c583d2adee7f082d8eb20ec95a14a88aa651b8ff3bf14a270bd3,PodSandboxId:636e68fcc4a0ede70ee6f96d82772b0e6d9b13771f907901f9f42d8491db67fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173396011
0191072222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ctjgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6a423-c466-4a36-add7-9401b3318dad,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1,PodSandboxId:2ac84efc2c5fcebccb17d5870528646779d8fafa88eeb44e1373595220f911ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733960107018500473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkpsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168a41ed-f854-4453-9157-1d3e444d4185,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde,PodSandboxId:24140bc84f0a5298706a7a93ef9a7d060dee59b542d4292d32320f6e899448d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733960094317872695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff225982bdaab034ea125e47b66b68c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca,PodSandboxId:e876b66ef7de37c4031f14018d94ff2b47018d746a76d9fa99037bbff56e9c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733960094364732659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5e6213caadcf4e71b2874b2c8f3150,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672,PodSandboxId:be44b1e012d09dba68dc335ed9c3ae7445ee0f21ee2889f04b1106e0a6d3199b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733960094287607930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf626506ebe98d943792651346e8c82e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a,PodSandboxId:b14a081a3d9d875fe069c4e71f326a35a4fbbf4a454441bb859df6997efca650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1
b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733960094225589645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a510befc909f02ab6a66cc801a1e10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f865a5f-913a-4901-940d-b47558641ba7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.866882237Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ae9f068-401e-4de8-9358-893608aacfe0 name=/runtime.v1.RuntimeService/Version
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.866972457Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ae9f068-401e-4de8-9358-893608aacfe0 name=/runtime.v1.RuntimeService/Version
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.868284053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9432cf65-983f-435e-9797-a1be624fd21d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.869561636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960646869534589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9432cf65-983f-435e-9797-a1be624fd21d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.870369266Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dfe2a6a-e9a8-4878-8fbe-c4786353fd03 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.870440724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dfe2a6a-e9a8-4878-8fbe-c4786353fd03 name=/runtime.v1.RuntimeService/ListContainers
	Dec 11 23:44:06 addons-021354 crio[659]: time="2024-12-11 23:44:06.870759334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:005ac7754a4f85035dcd2682227c43c4bcaaf82cd985c26c59ffcb95af37b1a6,PodSandboxId:90544d25dc8cc8ba8818ff9b88681cd0bc24cd3e3fdafbe6009e3856c1cb304a,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733960468838977798,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8b2cl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c6f30442-8f8d-47bb-83a7-e35c5f879569,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c8a7e2acd7265aa37a7a716bdb48d59846f009fec18d18f63460d6a412ed6b9,PodSandboxId:f44fc51622f81050ae72c9b3ff1845ce5923cb2dd9cc1d4104b8c206bb117770,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733960325434033919,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 264cded5-669e-4c91-a0aa-800234ac799a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d920673a1e830a704fcba21d58777c6eefac966c461616530df373e5177fe8b2,PodSandboxId:fc0769151650597b46fdcb3ac5d0efb89c9783b172906e5384e03e4886c9338a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733960272596070543,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5f37f102-2cd3-45d7-a
36e-58954eec3bcb,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79e8144c96f2b5267068573bf9ce48f07753d2c8abb7fdd4f929c88edcb85f85,PodSandboxId:6541effb8bb1c1dfeb04a4a4aad1c896e2c212dffa42d731a7c1aaed9d8b32da,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733960176793312543,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-v42nk,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 277fa5bf-2781-493c-86a5-d170dc8b9237,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a72f42e28b71fa5a39e90872a83d1bb9f5d045a3bdc10105b267eb50a636e85e,PodSandboxId:11067e9e56f14fe37fe06b06749aefa25ea33edc1462bae1bb5fc3270ae64a36,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733960167455984785,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-4rzfr,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 19598ba3-56e0-4552-a658-084d184b5ed0,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6860aab3fff86e85faeaed6ece3581b0e019402cdb41b3fb2ee44515455ee163,PodSandboxId:88896cc53547e1c6ed0a43c4a303e2e4617b9ea475b43abbd4e2fc81a923cf98,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733960143281494962,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-bh5l6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dcd97a68-2e6d-4f42-8c52-855402d21e6c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40b1897a55f24fb82118a636c26748e4b51ea902683b0e9fe5289033361bf6e1,PodSandboxId:691522df16e0207931430bb26401590c8d0e8b8b654433c3a50cf0393396ef42,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733960111332560576,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 86997c22-05b1-4987-b8ee-d1d7a36a0ddf,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7924fdda27f8c583d2adee7f082d8eb20ec95a14a88aa651b8ff3bf14a270bd3,PodSandboxId:636e68fcc4a0ede70ee6f96d82772b0e6d9b13771f907901f9f42d8491db67fb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173396011
0191072222,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-ctjgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28d6a423-c466-4a36-add7-9401b3318dad,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1,PodSandboxId:2ac84efc2c5fcebccb17d5870528646779d8fafa88eeb44e1373595220f911ed,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733960107018500473,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-nkpsm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 168a41ed-f854-4453-9157-1d3e444d4185,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde,PodSandboxId:24140bc84f0a5298706a7a93ef9a7d060dee59b542d4292d32320f6e899448d5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733960094317872695,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ff225982bdaab034ea125e47b66b68c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca,PodSandboxId:e876b66ef7de37c4031f14018d94ff2b47018d746a76d9fa99037bbff56e9c34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733960094364732659,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa5e6213caadcf4e71b2874b2c8f3150,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672,PodSandboxId:be44b1e012d09dba68dc335ed9c3ae7445ee0f21ee2889f04b1106e0a6d3199b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733960094287607930,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf626506ebe98d943792651346e8c82e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a,PodSandboxId:b14a081a3d9d875fe069c4e71f326a35a4fbbf4a454441bb859df6997efca650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1
b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733960094225589645,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021354,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f8a510befc909f02ab6a66cc801a1e10,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9dfe2a6a-e9a8-4878-8fbe-c4786353fd03 name=/runtime.v1.RuntimeService/ListContainers
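
The repeated RuntimeService/ListContainers requests above are routine CRI polling against the CRI-O socket, not an error. For reference only, a minimal Go sketch of issuing the same RPC — an illustration, not part of minikube or the test harness; it assumes the k8s.io/cri-api v1 bindings and google.golang.org/grpc are available, and uses the crio socket path reported in the node's cri-socket annotation further down:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket (same endpoint as the kubeadm cri-socket annotation).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same request shape as in the crio debug log: an empty filter returns the
	// full container list.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}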
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	005ac7754a4f8       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   90544d25dc8cc       hello-world-app-55bf9c44b4-8b2cl
	2c8a7e2acd726       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         5 minutes ago       Running             nginx                     0                   f44fc51622f81       nginx
	d920673a1e830       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   fc07691516505       busybox
	79e8144c96f2b       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   7 minutes ago       Running             metrics-server            0                   6541effb8bb1c       metrics-server-84c5f94fbc-v42nk
	a72f42e28b71f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        7 minutes ago       Running             local-path-provisioner    0                   11067e9e56f14       local-path-provisioner-86d989889c-4rzfr
	6860aab3fff86       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                8 minutes ago       Running             amd-gpu-device-plugin     0                   88896cc53547e       amd-gpu-device-plugin-bh5l6
	40b1897a55f24       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   691522df16e02       storage-provisioner
	7924fdda27f8c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        8 minutes ago       Running             coredns                   0                   636e68fcc4a0e       coredns-7c65d6cfc9-ctjgq
	bd4a1fabc2629       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        8 minutes ago       Running             kube-proxy                0                   2ac84efc2c5fc       kube-proxy-nkpsm
	f757cdb5508ff       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        9 minutes ago       Running             kube-apiserver            0                   e876b66ef7de3       kube-apiserver-addons-021354
	579175421d814       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        9 minutes ago       Running             kube-scheduler            0                   24140bc84f0a5       kube-scheduler-addons-021354
	de7d5e893bc1b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        9 minutes ago       Running             etcd                      0                   be44b1e012d09       etcd-addons-021354
	0b96198079cab       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        9 minutes ago       Running             kube-controller-manager   0                   b14a081a3d9d8       kube-controller-manager-addons-021354
	
	
	==> coredns [7924fdda27f8c583d2adee7f082d8eb20ec95a14a88aa651b8ff3bf14a270bd3] <==
	[INFO] 10.244.0.22:47946 - 60168 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000100761s
	[INFO] 10.244.0.22:36589 - 65453 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064892s
	[INFO] 10.244.0.22:47946 - 25272 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000109991s
	[INFO] 10.244.0.22:36589 - 30408 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061106s
	[INFO] 10.244.0.22:36589 - 9428 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000096344s
	[INFO] 10.244.0.22:47946 - 52131 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000091677s
	[INFO] 10.244.0.22:47946 - 30144 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000125637s
	[INFO] 10.244.0.22:36589 - 10978 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049004s
	[INFO] 10.244.0.22:47946 - 24221 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000158371s
	[INFO] 10.244.0.22:36589 - 59330 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060201s
	[INFO] 10.244.0.22:36589 - 9073 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000088082s
	[INFO] 10.244.0.22:41993 - 12918 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000102316s
	[INFO] 10.244.0.22:43840 - 64874 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000127804s
	[INFO] 10.244.0.22:43840 - 6525 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000099048s
	[INFO] 10.244.0.22:43840 - 43664 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000119691s
	[INFO] 10.244.0.22:43840 - 64834 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000099267s
	[INFO] 10.244.0.22:43840 - 14974 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000093965s
	[INFO] 10.244.0.22:43840 - 31010 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061621s
	[INFO] 10.244.0.22:43840 - 39781 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000098725s
	[INFO] 10.244.0.22:41993 - 57545 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000140948s
	[INFO] 10.244.0.22:41993 - 44117 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048946s
	[INFO] 10.244.0.22:41993 - 20107 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000047965s
	[INFO] 10.244.0.22:41993 - 61002 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005789s
	[INFO] 10.244.0.22:41993 - 16525 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000171559s
	[INFO] 10.244.0.22:41993 - 49234 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039723s
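
The NXDOMAIN/NOERROR pattern above is ordinary cluster-DNS search-path expansion rather than a failure: the querying pod (apparently in the ingress-nginx namespace, judging by the suffixes) resolves hello-world-app.default.svc.cluster.local, its stub resolver appends each search suffix in turn, and only the final absolute query answers NOERROR. A small Go sketch of that expansion order; the search list and ndots:5 are the usual kubelet-written defaults, assumed here rather than read from the pod:

package main

import (
	"fmt"
	"strings"
)

// expansion returns the queries a stub resolver with the given search list and
// ndots threshold sends for name, in order. With ndots:5, a name with fewer
// than five dots is tried with every search suffix before the absolute name,
// which matches the NXDOMAIN-then-NOERROR sequence in the coredns log above.
func expansion(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") >= ndots {
		out = append(out, name+".")
	}
	for _, s := range search {
		out = append(out, name+"."+s+".")
	}
	if strings.Count(name, ".") < ndots {
		out = append(out, name+".")
	}
	return out
}

func main() {
	search := []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range expansion("hello-world-app.default.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}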
	
	
	==> describe nodes <==
	Name:               addons-021354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-021354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=addons-021354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T23_35_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-021354
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:34:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-021354
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 11 Dec 2024 23:43:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 11 Dec 2024 23:41:38 +0000   Wed, 11 Dec 2024 23:34:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 11 Dec 2024 23:41:38 +0000   Wed, 11 Dec 2024 23:34:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 11 Dec 2024 23:41:38 +0000   Wed, 11 Dec 2024 23:34:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 11 Dec 2024 23:41:38 +0000   Wed, 11 Dec 2024 23:35:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.225
	  Hostname:    addons-021354
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2313c7905d7240539c21a58738545990
	  System UUID:                2313c790-5d72-4053-9c21-a58738545990
	  Boot ID:                    4b52f08d-7f6b-4c06-8e3f-51e5db38dc4f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  default                     hello-world-app-55bf9c44b4-8b2cl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 amd-gpu-device-plugin-bh5l6                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 coredns-7c65d6cfc9-ctjgq                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m3s
	  kube-system                 etcd-addons-021354                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m8s
	  kube-system                 kube-apiserver-addons-021354               250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-controller-manager-addons-021354      200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-proxy-nkpsm                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	  kube-system                 kube-scheduler-addons-021354               100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 metrics-server-84c5f94fbc-v42nk            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m56s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  local-path-storage          local-path-provisioner-86d989889c-4rzfr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m57s  kube-proxy       
	  Normal  Starting                 9m8s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m8s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m8s   kubelet          Node addons-021354 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m8s   kubelet          Node addons-021354 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m8s   kubelet          Node addons-021354 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m7s   kubelet          Node addons-021354 status is now: NodeReady
	  Normal  RegisteredNode           9m4s   node-controller  Node addons-021354 event: Registered Node addons-021354 in Controller
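
For reference, the request/limit percentages in the Allocated resources table above are computed against the node's allocatable values: 850m CPU of 2000m allocatable is about 42%, 370Mi of memory requests against 3912780Ki (about 3821Mi) allocatable is about 9%, and the 170Mi memory limit is about 4%.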
	
	
	==> dmesg <==
	[  +6.477393] systemd-fstab-generator[1204]: Ignoring "noauto" option for root device
	[  +0.075728] kauditd_printk_skb: 69 callbacks suppressed
	[Dec11 23:35] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.151719] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.004173] kauditd_printk_skb: 91 callbacks suppressed
	[  +5.230299] kauditd_printk_skb: 161 callbacks suppressed
	[  +7.444894] kauditd_printk_skb: 74 callbacks suppressed
	[Dec11 23:36] kauditd_printk_skb: 4 callbacks suppressed
	[ +19.180892] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.701789] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.221573] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.762541] kauditd_printk_skb: 23 callbacks suppressed
	[Dec11 23:37] kauditd_printk_skb: 15 callbacks suppressed
	[ +12.547293] kauditd_printk_skb: 9 callbacks suppressed
	[Dec11 23:38] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.687421] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.778038] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.066388] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.055297] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.608612] kauditd_printk_skb: 30 callbacks suppressed
	[  +7.177161] kauditd_printk_skb: 19 callbacks suppressed
	[ +12.732427] kauditd_printk_skb: 2 callbacks suppressed
	[Dec11 23:39] kauditd_printk_skb: 7 callbacks suppressed
	[Dec11 23:41] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.470459] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [de7d5e893bc1b71899ac10f020f57955b40d41bbf3d672494f812000256ce672] <==
	{"level":"warn","ts":"2024-12-11T23:37:36.758671Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"466.976381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-12-11T23:37:36.760408Z","caller":"traceutil/trace.go:171","msg":"trace[2113477679] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1195; }","duration":"467.687272ms","start":"2024-12-11T23:37:36.291675Z","end":"2024-12-11T23:37:36.759363Z","steps":["trace[2113477679] 'range keys from in-memory index tree'  (duration: 466.866ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:37:36.760559Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:37:36.291640Z","time spent":"468.903616ms","remote":"127.0.0.1:37384","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":576,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"warn","ts":"2024-12-11T23:37:36.759194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"398.729286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:37:36.760835Z","caller":"traceutil/trace.go:171","msg":"trace[683057435] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1195; }","duration":"400.378347ms","start":"2024-12-11T23:37:36.360448Z","end":"2024-12-11T23:37:36.760826Z","steps":["trace[683057435] 'range keys from in-memory index tree'  (duration: 398.676873ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:37:36.760899Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:37:36.360413Z","time spent":"400.445579ms","remote":"127.0.0.1:37308","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-12-11T23:37:36.761344Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.436379ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:37:36.762717Z","caller":"traceutil/trace.go:171","msg":"trace[672734179] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1195; }","duration":"150.807428ms","start":"2024-12-11T23:37:36.611900Z","end":"2024-12-11T23:37:36.762708Z","steps":["trace[672734179] 'range keys from in-memory index tree'  (duration: 149.396911ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:38:14.105535Z","caller":"traceutil/trace.go:171","msg":"trace[438081125] linearizableReadLoop","detail":"{readStateIndex:1434; appliedIndex:1433; }","duration":"183.133847ms","start":"2024-12-11T23:38:13.922378Z","end":"2024-12-11T23:38:14.105512Z","steps":["trace[438081125] 'read index received'  (duration: 182.94168ms)","trace[438081125] 'applied index is now lower than readState.Index'  (duration: 191.714µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:38:14.105948Z","caller":"traceutil/trace.go:171","msg":"trace[1415719321] transaction","detail":"{read_only:false; response_revision:1376; number_of_response:1; }","duration":"218.505569ms","start":"2024-12-11T23:38:13.887431Z","end":"2024-12-11T23:38:14.105937Z","steps":["trace[1415719321] 'process raft request'  (duration: 217.92654ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:14.106100Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.721128ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:38:14.106124Z","caller":"traceutil/trace.go:171","msg":"trace[385756639] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1376; }","duration":"183.757634ms","start":"2024-12-11T23:38:13.922356Z","end":"2024-12-11T23:38:14.106114Z","steps":["trace[385756639] 'agreement among raft nodes before linearized reading'  (duration: 183.700343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:14.107252Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.71252ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:38:14.107284Z","caller":"traceutil/trace.go:171","msg":"trace[1205703228] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1376; }","duration":"152.796306ms","start":"2024-12-11T23:38:13.954480Z","end":"2024-12-11T23:38:14.107276Z","steps":["trace[1205703228] 'agreement among raft nodes before linearized reading'  (duration: 152.690427ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:38:32.036820Z","caller":"traceutil/trace.go:171","msg":"trace[711663538] linearizableReadLoop","detail":"{readStateIndex:1570; appliedIndex:1569; }","duration":"305.044897ms","start":"2024-12-11T23:38:31.731763Z","end":"2024-12-11T23:38:32.036808Z","steps":["trace[711663538] 'read index received'  (duration: 304.870642ms)","trace[711663538] 'applied index is now lower than readState.Index'  (duration: 173.748µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-11T23:38:32.037099Z","caller":"traceutil/trace.go:171","msg":"trace[1440514323] transaction","detail":"{read_only:false; response_revision:1507; number_of_response:1; }","duration":"315.50833ms","start":"2024-12-11T23:38:31.721580Z","end":"2024-12-11T23:38:32.037089Z","steps":["trace[1440514323] 'process raft request'  (duration: 315.13463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:32.037285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:38:31.721563Z","time spent":"315.601747ms","remote":"127.0.0.1:37308","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3606,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/test-local-path\" mod_revision:1505 > success:<request_put:<key:\"/registry/pods/default/test-local-path\" value_size:3560 >> failure:<request_range:<key:\"/registry/pods/default/test-local-path\" > >"}
	{"level":"warn","ts":"2024-12-11T23:38:32.037491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.724566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/test-local-path\" ","response":"range_response_count:1 size:3621"}
	{"level":"info","ts":"2024-12-11T23:38:32.037532Z","caller":"traceutil/trace.go:171","msg":"trace[785189000] range","detail":"{range_begin:/registry/pods/default/test-local-path; range_end:; response_count:1; response_revision:1507; }","duration":"305.765692ms","start":"2024-12-11T23:38:31.731758Z","end":"2024-12-11T23:38:32.037524Z","steps":["trace[785189000] 'agreement among raft nodes before linearized reading'  (duration: 305.701523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:32.037554Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-11T23:38:31.731718Z","time spent":"305.830284ms","remote":"127.0.0.1:37308","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3644,"request content":"key:\"/registry/pods/default/test-local-path\" "}
	{"level":"warn","ts":"2024-12-11T23:38:32.037677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.391983ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-12-11T23:38:32.037730Z","caller":"traceutil/trace.go:171","msg":"trace[405331215] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1507; }","duration":"157.452538ms","start":"2024-12-11T23:38:31.880269Z","end":"2024-12-11T23:38:32.037722Z","steps":["trace[405331215] 'agreement among raft nodes before linearized reading'  (duration: 157.315212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-11T23:38:32.037738Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.61111ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-11T23:38:32.038775Z","caller":"traceutil/trace.go:171","msg":"trace[1171929756] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1507; }","duration":"116.643367ms","start":"2024-12-11T23:38:31.922121Z","end":"2024-12-11T23:38:32.038765Z","steps":["trace[1171929756] 'agreement among raft nodes before linearized reading'  (duration: 115.603584ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-11T23:38:57.386137Z","caller":"traceutil/trace.go:171","msg":"trace[600661065] transaction","detail":"{read_only:false; response_revision:1659; number_of_response:1; }","duration":"286.39895ms","start":"2024-12-11T23:38:57.099721Z","end":"2024-12-11T23:38:57.386120Z","steps":["trace[600661065] 'process raft request'  (duration: 286.287039ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:44:07 up 9 min,  0 users,  load average: 0.12, 0.69, 0.58
	Linux addons-021354 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f757cdb5508ff773d99a72b6894ba1adaea01ec07f93489d8ed8d9d0b632b1ca] <==
	E1211 23:37:22.307646       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.104.253:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.104.253:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.104.253:443: connect: connection refused" logger="UnhandledError"
	E1211 23:37:22.318549       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.104.253:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.104.253:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.104.253:443: connect: connection refused" logger="UnhandledError"
	I1211 23:37:22.400583       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1211 23:37:59.727102       1 conn.go:339] Error on socket receive: read tcp 192.168.39.225:8443->192.168.39.1:34578: use of closed network connection
	E1211 23:37:59.939714       1 conn.go:339] Error on socket receive: read tcp 192.168.39.225:8443->192.168.39.1:34602: use of closed network connection
	I1211 23:38:09.091339       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.100.164"}
	I1211 23:38:38.114044       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1211 23:38:39.153376       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1211 23:38:40.163757       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1211 23:38:40.326516       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.241.176"}
	I1211 23:39:05.628974       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1211 23:39:29.120535       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.120783       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:39:29.140504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.140559       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:39:29.173737       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.173834       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:39:29.184438       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.184546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1211 23:39:29.206611       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1211 23:39:29.206665       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1211 23:39:30.177014       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1211 23:39:30.207480       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1211 23:39:30.235114       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1211 23:41:05.030200       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.105.125"}
	
	
	==> kube-controller-manager [0b96198079cabac6ff2f3f692ffa8e5953aa999f188edef550afa8a73547ad1a] <==
	E1211 23:41:59.633097       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:42:14.858592       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:42:14.858719       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:42:22.405945       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:42:22.406015       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:42:34.417567       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:42:34.417798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:42:45.837879       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:42:45.838037       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:42:59.644493       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:42:59.644568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:43:01.145602       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:43:01.145709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:43:06.330356       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:43:06.330471       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:43:23.997983       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:43:23.998262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:43:43.616440       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:43:43.616529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:43:46.328880       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:43:46.328921       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:43:53.861399       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:43:53.861528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1211 23:44:04.909953       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1211 23:44:04.910012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [bd4a1fabc26293cf3e4e37ef7f6c35f1760c182a5778300b86948db3f7d64be1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1211 23:35:09.464101       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1211 23:35:09.588258       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.225"]
	E1211 23:35:09.588355       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:35:10.096781       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1211 23:35:10.096824       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:35:10.096856       1 server_linux.go:169] "Using iptables Proxier"
	I1211 23:35:10.105408       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:35:10.105734       1 server.go:483] "Version info" version="v1.31.2"
	I1211 23:35:10.105747       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:35:10.109696       1 config.go:199] "Starting service config controller"
	I1211 23:35:10.109737       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1211 23:35:10.109772       1 config.go:105] "Starting endpoint slice config controller"
	I1211 23:35:10.109779       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1211 23:35:10.121806       1 config.go:328] "Starting node config controller"
	I1211 23:35:10.121822       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1211 23:35:10.210106       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1211 23:35:10.210164       1 shared_informer.go:320] Caches are synced for service config
	I1211 23:35:10.222466       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [579175421d8144e2935676dc1171668415a815d02c219166b2fa6fa75a977cde] <==
	W1211 23:34:57.051463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:57.051492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.051544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:57.051556       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.863926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1211 23:34:57.864095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.885476       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1211 23:34:57.885620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.917754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1211 23:34:57.917919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:57.989726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1211 23:34:57.989850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.069069       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:58.069572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.106695       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:58.106827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.184398       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1211 23:34:58.184487       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1211 23:34:58.212527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1211 23:34:58.212623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.330871       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:58.332038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1211 23:34:58.331846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1211 23:34:58.332249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1211 23:35:01.140084       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 11 23:42:59 addons-021354 kubelet[1211]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 11 23:42:59 addons-021354 kubelet[1211]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 11 23:42:59 addons-021354 kubelet[1211]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 11 23:42:59 addons-021354 kubelet[1211]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 11 23:42:59 addons-021354 kubelet[1211]: E1211 23:42:59.983724    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960579983155000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:42:59 addons-021354 kubelet[1211]: E1211 23:42:59.983842    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960579983155000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:01 addons-021354 kubelet[1211]: I1211 23:43:01.470861    1211 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 11 23:43:09 addons-021354 kubelet[1211]: E1211 23:43:09.992506    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960589988116102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:09 addons-021354 kubelet[1211]: E1211 23:43:09.992964    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960589988116102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:19 addons-021354 kubelet[1211]: E1211 23:43:19.995061    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960599994826255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:19 addons-021354 kubelet[1211]: E1211 23:43:19.995086    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960599994826255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:30 addons-021354 kubelet[1211]: E1211 23:43:30.000282    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960609999512133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:30 addons-021354 kubelet[1211]: E1211 23:43:30.000324    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960609999512133,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:40 addons-021354 kubelet[1211]: E1211 23:43:40.003287    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960620002821910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:40 addons-021354 kubelet[1211]: E1211 23:43:40.003618    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960620002821910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:50 addons-021354 kubelet[1211]: E1211 23:43:50.007920    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960630007453713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:50 addons-021354 kubelet[1211]: E1211 23:43:50.008295    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960630007453713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:43:59 addons-021354 kubelet[1211]: E1211 23:43:59.486286    1211 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 11 23:43:59 addons-021354 kubelet[1211]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 11 23:43:59 addons-021354 kubelet[1211]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 11 23:43:59 addons-021354 kubelet[1211]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 11 23:43:59 addons-021354 kubelet[1211]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 11 23:44:00 addons-021354 kubelet[1211]: E1211 23:44:00.011342    1211 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960640010922862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:44:00 addons-021354 kubelet[1211]: E1211 23:44:00.011368    1211 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733960640010922862,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604540,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 11 23:44:02 addons-021354 kubelet[1211]: I1211 23:44:02.469182    1211 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [40b1897a55f24fb82118a636c26748e4b51ea902683b0e9fe5289033361bf6e1] <==
	I1211 23:35:12.318560       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1211 23:35:12.339534       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1211 23:35:12.339600       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1211 23:35:12.349555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1211 23:35:12.349698       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-021354_821de941-d5bc-4f30-b71e-a9a2b7db9d21!
	I1211 23:35:12.350335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d4f503c1-2c31-406f-b6bb-801542735018", APIVersion:"v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-021354_821de941-d5bc-4f30-b71e-a9a2b7db9d21 became leader
	I1211 23:35:12.451577       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-021354_821de941-d5bc-4f30-b71e-a9a2b7db9d21!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-021354 -n addons-021354
helpers_test.go:261: (dbg) Run:  kubectl --context addons-021354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (360.31s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-021354
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-021354: exit status 82 (2m0.481757893s)

                                                
                                                
-- stdout --
	* Stopping node "addons-021354"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-021354" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-021354
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-021354: exit status 11 (21.538357233s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-021354" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-021354
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-021354: exit status 11 (6.144220215s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-021354" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-021354
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-021354: exit status 11 (6.143715706s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-021354" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.31s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 node stop m02 -v=7 --alsologtostderr
E1212 00:04:09.688802   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:04:17.637231   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565823 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.460934345s)

                                                
                                                
-- stdout --
	* Stopping node "ha-565823-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:03:37.469436  110209 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:03:37.469702  110209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:03:37.469747  110209 out.go:358] Setting ErrFile to fd 2...
	I1212 00:03:37.469785  110209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:03:37.470144  110209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:03:37.470427  110209 mustload.go:65] Loading cluster: ha-565823
	I1212 00:03:37.470858  110209 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:03:37.470879  110209 stop.go:39] StopHost: ha-565823-m02
	I1212 00:03:37.471392  110209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:03:37.471455  110209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:03:37.487007  110209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I1212 00:03:37.487523  110209 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:03:37.488104  110209 main.go:141] libmachine: Using API Version  1
	I1212 00:03:37.488126  110209 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:03:37.488447  110209 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:03:37.490533  110209 out.go:177] * Stopping node "ha-565823-m02"  ...
	I1212 00:03:37.491774  110209 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1212 00:03:37.491812  110209 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:03:37.492019  110209 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1212 00:03:37.492050  110209 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:03:37.495641  110209 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:03:37.496014  110209 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:03:37.496046  110209 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:03:37.496170  110209 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:03:37.496321  110209 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:03:37.496470  110209 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:03:37.496580  110209 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:03:37.582609  110209 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1212 00:03:37.640893  110209 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1212 00:03:37.679646  110209 main.go:141] libmachine: Stopping "ha-565823-m02"...
	I1212 00:03:37.679676  110209 main.go:141] libmachine: (ha-565823-m02) Calling .GetState
	I1212 00:03:37.681203  110209 main.go:141] libmachine: (ha-565823-m02) Calling .Stop
	I1212 00:03:37.684304  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 0/120
	I1212 00:03:38.685612  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 1/120
	I1212 00:03:39.687423  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 2/120
	I1212 00:03:40.688655  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 3/120
	I1212 00:03:41.690330  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 4/120
	I1212 00:03:42.692243  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 5/120
	I1212 00:03:43.693964  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 6/120
	I1212 00:03:44.695140  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 7/120
	I1212 00:03:45.696401  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 8/120
	I1212 00:03:46.698097  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 9/120
	I1212 00:03:47.700405  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 10/120
	I1212 00:03:48.702344  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 11/120
	I1212 00:03:49.703788  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 12/120
	I1212 00:03:50.706361  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 13/120
	I1212 00:03:51.708199  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 14/120
	I1212 00:03:52.710173  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 15/120
	I1212 00:03:53.711635  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 16/120
	I1212 00:03:54.712997  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 17/120
	I1212 00:03:55.715340  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 18/120
	I1212 00:03:56.716617  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 19/120
	I1212 00:03:57.718644  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 20/120
	I1212 00:03:58.719992  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 21/120
	I1212 00:03:59.722270  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 22/120
	I1212 00:04:00.724041  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 23/120
	I1212 00:04:01.725213  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 24/120
	I1212 00:04:02.727255  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 25/120
	I1212 00:04:03.728695  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 26/120
	I1212 00:04:04.730614  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 27/120
	I1212 00:04:05.731933  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 28/120
	I1212 00:04:06.733245  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 29/120
	I1212 00:04:07.735361  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 30/120
	I1212 00:04:08.736629  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 31/120
	I1212 00:04:09.738448  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 32/120
	I1212 00:04:10.739911  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 33/120
	I1212 00:04:11.742139  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 34/120
	I1212 00:04:12.744078  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 35/120
	I1212 00:04:13.745675  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 36/120
	I1212 00:04:14.747306  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 37/120
	I1212 00:04:15.748813  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 38/120
	I1212 00:04:16.750112  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 39/120
	I1212 00:04:17.751873  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 40/120
	I1212 00:04:18.753244  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 41/120
	I1212 00:04:19.754583  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 42/120
	I1212 00:04:20.755941  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 43/120
	I1212 00:04:21.758186  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 44/120
	I1212 00:04:22.760217  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 45/120
	I1212 00:04:23.761530  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 46/120
	I1212 00:04:24.762974  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 47/120
	I1212 00:04:25.764465  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 48/120
	I1212 00:04:26.766214  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 49/120
	I1212 00:04:27.768080  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 50/120
	I1212 00:04:28.769512  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 51/120
	I1212 00:04:29.770956  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 52/120
	I1212 00:04:30.772173  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 53/120
	I1212 00:04:31.773604  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 54/120
	I1212 00:04:32.775744  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 55/120
	I1212 00:04:33.777052  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 56/120
	I1212 00:04:34.779337  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 57/120
	I1212 00:04:35.781033  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 58/120
	I1212 00:04:36.782387  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 59/120
	I1212 00:04:37.784340  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 60/120
	I1212 00:04:38.786648  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 61/120
	I1212 00:04:39.787828  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 62/120
	I1212 00:04:40.789966  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 63/120
	I1212 00:04:41.791115  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 64/120
	I1212 00:04:42.793002  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 65/120
	I1212 00:04:43.794353  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 66/120
	I1212 00:04:44.795840  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 67/120
	I1212 00:04:45.797998  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 68/120
	I1212 00:04:46.799410  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 69/120
	I1212 00:04:47.801786  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 70/120
	I1212 00:04:48.803100  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 71/120
	I1212 00:04:49.805422  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 72/120
	I1212 00:04:50.806648  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 73/120
	I1212 00:04:51.808439  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 74/120
	I1212 00:04:52.810175  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 75/120
	I1212 00:04:53.811360  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 76/120
	I1212 00:04:54.812718  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 77/120
	I1212 00:04:55.814044  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 78/120
	I1212 00:04:56.815274  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 79/120
	I1212 00:04:57.817417  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 80/120
	I1212 00:04:58.819289  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 81/120
	I1212 00:04:59.820980  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 82/120
	I1212 00:05:00.822286  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 83/120
	I1212 00:05:01.823690  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 84/120
	I1212 00:05:02.825633  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 85/120
	I1212 00:05:03.827003  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 86/120
	I1212 00:05:04.828406  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 87/120
	I1212 00:05:05.829745  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 88/120
	I1212 00:05:06.831050  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 89/120
	I1212 00:05:07.833447  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 90/120
	I1212 00:05:08.834838  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 91/120
	I1212 00:05:09.836329  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 92/120
	I1212 00:05:10.837669  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 93/120
	I1212 00:05:11.838963  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 94/120
	I1212 00:05:12.840764  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 95/120
	I1212 00:05:13.842205  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 96/120
	I1212 00:05:14.843383  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 97/120
	I1212 00:05:15.845644  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 98/120
	I1212 00:05:16.847047  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 99/120
	I1212 00:05:17.849278  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 100/120
	I1212 00:05:18.850626  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 101/120
	I1212 00:05:19.852272  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 102/120
	I1212 00:05:20.853743  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 103/120
	I1212 00:05:21.855116  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 104/120
	I1212 00:05:22.856974  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 105/120
	I1212 00:05:23.858916  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 106/120
	I1212 00:05:24.860429  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 107/120
	I1212 00:05:25.861792  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 108/120
	I1212 00:05:26.863232  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 109/120
	I1212 00:05:27.865222  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 110/120
	I1212 00:05:28.866736  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 111/120
	I1212 00:05:29.869238  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 112/120
	I1212 00:05:30.870784  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 113/120
	I1212 00:05:31.872651  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 114/120
	I1212 00:05:32.874200  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 115/120
	I1212 00:05:33.875689  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 116/120
	I1212 00:05:34.876899  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 117/120
	I1212 00:05:35.878157  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 118/120
	I1212 00:05:36.879367  110209 main.go:141] libmachine: (ha-565823-m02) Waiting for machine to stop 119/120
	I1212 00:05:37.880874  110209 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1212 00:05:37.881023  110209 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-565823 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr
E1212 00:05:39.559517   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr: (18.830335619s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565823 -n ha-565823
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 logs -n 25: (1.418797711s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m03_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m04 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp testdata/cp-test.txt                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m04_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03:/home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m03 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565823 node stop m02 -v=7                                                     | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:58:49
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:58:49.879098  106017 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:58:49.879215  106017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:49.879223  106017 out.go:358] Setting ErrFile to fd 2...
	I1211 23:58:49.879228  106017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:49.879424  106017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:58:49.880067  106017 out.go:352] Setting JSON to false
	I1211 23:58:49.880934  106017 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9672,"bootTime":1733951858,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:58:49.881036  106017 start.go:139] virtualization: kvm guest
	I1211 23:58:49.883482  106017 out.go:177] * [ha-565823] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1211 23:58:49.884859  106017 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:58:49.884853  106017 notify.go:220] Checking for updates...
	I1211 23:58:49.887649  106017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:58:49.889057  106017 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:58:49.890422  106017 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:49.891732  106017 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:58:49.893196  106017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:58:49.894834  106017 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:58:49.929647  106017 out.go:177] * Using the kvm2 driver based on user configuration
	I1211 23:58:49.931090  106017 start.go:297] selected driver: kvm2
	I1211 23:58:49.931102  106017 start.go:901] validating driver "kvm2" against <nil>
	I1211 23:58:49.931118  106017 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:58:49.931896  106017 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:58:49.931980  106017 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1211 23:58:49.946877  106017 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1211 23:58:49.946925  106017 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:58:49.947184  106017 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:58:49.947219  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:58:49.947291  106017 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1211 23:58:49.947306  106017 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:58:49.947387  106017 start.go:340] cluster config:
	{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:58:49.947534  106017 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:58:49.949244  106017 out.go:177] * Starting "ha-565823" primary control-plane node in "ha-565823" cluster
	I1211 23:58:49.950461  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:58:49.950504  106017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:58:49.950517  106017 cache.go:56] Caching tarball of preloaded images
	I1211 23:58:49.950593  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:58:49.950607  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:58:49.950924  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:58:49.950947  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json: {Name:mk87ab89a0730849be8d507f8c0453b4c014ad9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:58:49.951100  106017 start.go:360] acquireMachinesLock for ha-565823: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:58:49.951143  106017 start.go:364] duration metric: took 25.725µs to acquireMachinesLock for "ha-565823"
	I1211 23:58:49.951167  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
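
The two config dumps above show the full cluster config the kvm2 driver provisions from: the VM shape (2 CPUs, 2200MB memory, 20000MB disk), the libvirt URI qemu:///system, and the Kubernetes settings (v1.31.2 on crio with the cni network plugin). The same structure is persisted to the profile's config.json at the path logged earlier. As a rough illustration only — the struct below is a hand-picked subset with field names taken from the dump, not minikube's actual config types — the saved profile can be read back like this:

    // Minimal sketch (not minikube's own types): read the saved profile
    // config.json and print a few of the fields that appear in the dump above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type kubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
        NetworkPlugin     string
        ServiceCIDR       string
    }

    type clusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        DiskSize         int
        KubernetesConfig kubernetesConfig
    }

    func main() {
        // Path taken from the "Saving config to ..." log line; adjust for your profile.
        data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/ha-565823/config.json"))
        if err != nil {
            panic(err)
        }
        var cfg clusterConfig
        if err := json.Unmarshal(data, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%s: driver=%s cpus=%d memory=%dMB k8s=%s runtime=%s\n",
            cfg.Name, cfg.Driver, cfg.CPUs, cfg.Memory,
            cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime)
    }

Run against the profile directory above, this would print the driver, resources, and Kubernetes version that the rest of this log goes on to provision.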
	I1211 23:58:49.951248  106017 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:58:49.952920  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 23:58:49.953077  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:58:49.953130  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:58:49.967497  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I1211 23:58:49.967981  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:58:49.968550  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:58:49.968587  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:58:49.968981  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:58:49.969194  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:58:49.969410  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:58:49.969566  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1211 23:58:49.969614  106017 client.go:168] LocalClient.Create starting
	I1211 23:58:49.969660  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:58:49.969702  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:58:49.969727  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:58:49.969804  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:58:49.969833  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:58:49.969852  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:58:49.969875  106017 main.go:141] libmachine: Running pre-create checks...
	I1211 23:58:49.969887  106017 main.go:141] libmachine: (ha-565823) Calling .PreCreateCheck
	I1211 23:58:49.970228  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:58:49.970579  106017 main.go:141] libmachine: Creating machine...
	I1211 23:58:49.970592  106017 main.go:141] libmachine: (ha-565823) Calling .Create
	I1211 23:58:49.970720  106017 main.go:141] libmachine: (ha-565823) Creating KVM machine...
	I1211 23:58:49.971894  106017 main.go:141] libmachine: (ha-565823) DBG | found existing default KVM network
	I1211 23:58:49.972543  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:49.972397  106042 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I1211 23:58:49.972595  106017 main.go:141] libmachine: (ha-565823) DBG | created network xml: 
	I1211 23:58:49.972612  106017 main.go:141] libmachine: (ha-565823) DBG | <network>
	I1211 23:58:49.972619  106017 main.go:141] libmachine: (ha-565823) DBG |   <name>mk-ha-565823</name>
	I1211 23:58:49.972628  106017 main.go:141] libmachine: (ha-565823) DBG |   <dns enable='no'/>
	I1211 23:58:49.972641  106017 main.go:141] libmachine: (ha-565823) DBG |   
	I1211 23:58:49.972653  106017 main.go:141] libmachine: (ha-565823) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1211 23:58:49.972659  106017 main.go:141] libmachine: (ha-565823) DBG |     <dhcp>
	I1211 23:58:49.972666  106017 main.go:141] libmachine: (ha-565823) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1211 23:58:49.972678  106017 main.go:141] libmachine: (ha-565823) DBG |     </dhcp>
	I1211 23:58:49.972689  106017 main.go:141] libmachine: (ha-565823) DBG |   </ip>
	I1211 23:58:49.972696  106017 main.go:141] libmachine: (ha-565823) DBG |   
	I1211 23:58:49.972705  106017 main.go:141] libmachine: (ha-565823) DBG | </network>
	I1211 23:58:49.972742  106017 main.go:141] libmachine: (ha-565823) DBG | 
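
The DBG lines above print the network XML the driver generated: a private libvirt network named mk-ha-565823 on 192.168.39.0/24 with DNS disabled and a DHCP range of .2–.253, with .1 reserved for the host bridge. The sketch below defines and starts an equivalent network by shelling out to virsh; the kvm2 driver itself goes through the libvirt Go bindings, so this is only an illustration of the same end state.

    // Sketch: define and start a private libvirt network equivalent to the XML
    // printed above, using virsh. Not the driver's actual code path.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    const networkXML = `<network>
      <name>mk-ha-565823</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        f, err := os.CreateTemp("", "mk-net-*.xml")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(networkXML); err != nil {
            panic(err)
        }
        f.Close()

        // virsh reads the XML file, registers the network, then starts it.
        for _, args := range [][]string{
            {"net-define", f.Name()},
            {"net-start", "mk-ha-565823"},
        } {
            cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                panic(fmt.Errorf("virsh %v: %w", args, err))
            }
        }
    }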
	I1211 23:58:49.977592  106017 main.go:141] libmachine: (ha-565823) DBG | trying to create private KVM network mk-ha-565823 192.168.39.0/24...
	I1211 23:58:50.045920  106017 main.go:141] libmachine: (ha-565823) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 ...
	I1211 23:58:50.045945  106017 main.go:141] libmachine: (ha-565823) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:58:50.045957  106017 main.go:141] libmachine: (ha-565823) DBG | private KVM network mk-ha-565823 192.168.39.0/24 created
	I1211 23:58:50.045974  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.045851  106042 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:50.046037  106017 main.go:141] libmachine: (ha-565823) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:58:50.332532  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.332355  106042 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa...
	I1211 23:58:50.607374  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.607211  106042 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/ha-565823.rawdisk...
	I1211 23:58:50.607405  106017 main.go:141] libmachine: (ha-565823) DBG | Writing magic tar header
	I1211 23:58:50.607418  106017 main.go:141] libmachine: (ha-565823) DBG | Writing SSH key tar header
	I1211 23:58:50.607425  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.607336  106042 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 ...
	I1211 23:58:50.607436  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823
	I1211 23:58:50.607514  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:58:50.607560  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 (perms=drwx------)
	I1211 23:58:50.607571  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:50.607581  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:58:50.607606  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:58:50.607624  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:58:50.607642  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:58:50.607654  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home
	I1211 23:58:50.607666  106017 main.go:141] libmachine: (ha-565823) DBG | Skipping /home - not owner
	I1211 23:58:50.607678  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:58:50.607687  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:58:50.607693  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:58:50.607704  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:58:50.607717  106017 main.go:141] libmachine: (ha-565823) Creating domain...
	I1211 23:58:50.608802  106017 main.go:141] libmachine: (ha-565823) define libvirt domain using xml: 
	I1211 23:58:50.608821  106017 main.go:141] libmachine: (ha-565823) <domain type='kvm'>
	I1211 23:58:50.608828  106017 main.go:141] libmachine: (ha-565823)   <name>ha-565823</name>
	I1211 23:58:50.608832  106017 main.go:141] libmachine: (ha-565823)   <memory unit='MiB'>2200</memory>
	I1211 23:58:50.608838  106017 main.go:141] libmachine: (ha-565823)   <vcpu>2</vcpu>
	I1211 23:58:50.608842  106017 main.go:141] libmachine: (ha-565823)   <features>
	I1211 23:58:50.608846  106017 main.go:141] libmachine: (ha-565823)     <acpi/>
	I1211 23:58:50.608850  106017 main.go:141] libmachine: (ha-565823)     <apic/>
	I1211 23:58:50.608857  106017 main.go:141] libmachine: (ha-565823)     <pae/>
	I1211 23:58:50.608868  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.608875  106017 main.go:141] libmachine: (ha-565823)   </features>
	I1211 23:58:50.608879  106017 main.go:141] libmachine: (ha-565823)   <cpu mode='host-passthrough'>
	I1211 23:58:50.608887  106017 main.go:141] libmachine: (ha-565823)   
	I1211 23:58:50.608891  106017 main.go:141] libmachine: (ha-565823)   </cpu>
	I1211 23:58:50.608898  106017 main.go:141] libmachine: (ha-565823)   <os>
	I1211 23:58:50.608902  106017 main.go:141] libmachine: (ha-565823)     <type>hvm</type>
	I1211 23:58:50.608977  106017 main.go:141] libmachine: (ha-565823)     <boot dev='cdrom'/>
	I1211 23:58:50.609011  106017 main.go:141] libmachine: (ha-565823)     <boot dev='hd'/>
	I1211 23:58:50.609024  106017 main.go:141] libmachine: (ha-565823)     <bootmenu enable='no'/>
	I1211 23:58:50.609036  106017 main.go:141] libmachine: (ha-565823)   </os>
	I1211 23:58:50.609043  106017 main.go:141] libmachine: (ha-565823)   <devices>
	I1211 23:58:50.609052  106017 main.go:141] libmachine: (ha-565823)     <disk type='file' device='cdrom'>
	I1211 23:58:50.609063  106017 main.go:141] libmachine: (ha-565823)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/boot2docker.iso'/>
	I1211 23:58:50.609074  106017 main.go:141] libmachine: (ha-565823)       <target dev='hdc' bus='scsi'/>
	I1211 23:58:50.609085  106017 main.go:141] libmachine: (ha-565823)       <readonly/>
	I1211 23:58:50.609094  106017 main.go:141] libmachine: (ha-565823)     </disk>
	I1211 23:58:50.609105  106017 main.go:141] libmachine: (ha-565823)     <disk type='file' device='disk'>
	I1211 23:58:50.609117  106017 main.go:141] libmachine: (ha-565823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:58:50.609133  106017 main.go:141] libmachine: (ha-565823)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/ha-565823.rawdisk'/>
	I1211 23:58:50.609144  106017 main.go:141] libmachine: (ha-565823)       <target dev='hda' bus='virtio'/>
	I1211 23:58:50.609154  106017 main.go:141] libmachine: (ha-565823)     </disk>
	I1211 23:58:50.609164  106017 main.go:141] libmachine: (ha-565823)     <interface type='network'>
	I1211 23:58:50.609176  106017 main.go:141] libmachine: (ha-565823)       <source network='mk-ha-565823'/>
	I1211 23:58:50.609187  106017 main.go:141] libmachine: (ha-565823)       <model type='virtio'/>
	I1211 23:58:50.609198  106017 main.go:141] libmachine: (ha-565823)     </interface>
	I1211 23:58:50.609209  106017 main.go:141] libmachine: (ha-565823)     <interface type='network'>
	I1211 23:58:50.609221  106017 main.go:141] libmachine: (ha-565823)       <source network='default'/>
	I1211 23:58:50.609230  106017 main.go:141] libmachine: (ha-565823)       <model type='virtio'/>
	I1211 23:58:50.609240  106017 main.go:141] libmachine: (ha-565823)     </interface>
	I1211 23:58:50.609249  106017 main.go:141] libmachine: (ha-565823)     <serial type='pty'>
	I1211 23:58:50.609271  106017 main.go:141] libmachine: (ha-565823)       <target port='0'/>
	I1211 23:58:50.609292  106017 main.go:141] libmachine: (ha-565823)     </serial>
	I1211 23:58:50.609319  106017 main.go:141] libmachine: (ha-565823)     <console type='pty'>
	I1211 23:58:50.609342  106017 main.go:141] libmachine: (ha-565823)       <target type='serial' port='0'/>
	I1211 23:58:50.609358  106017 main.go:141] libmachine: (ha-565823)     </console>
	I1211 23:58:50.609368  106017 main.go:141] libmachine: (ha-565823)     <rng model='virtio'>
	I1211 23:58:50.609380  106017 main.go:141] libmachine: (ha-565823)       <backend model='random'>/dev/random</backend>
	I1211 23:58:50.609388  106017 main.go:141] libmachine: (ha-565823)     </rng>
	I1211 23:58:50.609393  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.609399  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.609404  106017 main.go:141] libmachine: (ha-565823)   </devices>
	I1211 23:58:50.609412  106017 main.go:141] libmachine: (ha-565823) </domain>
	I1211 23:58:50.609425  106017 main.go:141] libmachine: (ha-565823) 
	I1211 23:58:50.614253  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:5a:5d:6a in network default
	I1211 23:58:50.614867  106017 main.go:141] libmachine: (ha-565823) Ensuring networks are active...
	I1211 23:58:50.614888  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:50.615542  106017 main.go:141] libmachine: (ha-565823) Ensuring network default is active
	I1211 23:58:50.615828  106017 main.go:141] libmachine: (ha-565823) Ensuring network mk-ha-565823 is active
	I1211 23:58:50.616242  106017 main.go:141] libmachine: (ha-565823) Getting domain xml...
	I1211 23:58:50.616898  106017 main.go:141] libmachine: (ha-565823) Creating domain...
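
With the domain defined and both networks active, the driver's next job is to learn the VM's IP address. There is no guest agent at this point, so it watches the DHCP leases of mk-ha-565823 for the MAC address it assigned (52:54:00:2b:2e:da). The sketch below checks the same lease table with `virsh net-dhcp-leases`; minikube reads the leases through libvirt rather than virsh, and the column parsing here is an assumption about virsh's tabular output.

    // Sketch: resolve a new VM's IP by scanning the private network's DHCP
    // leases for its MAC address.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func leaseIP(network, mac string) (string, error) {
        out, err := exec.Command("virsh", "--connect", "qemu:///system",
            "net-dhcp-leases", network).Output()
        if err != nil {
            return "", err
        }
        for _, line := range strings.Split(string(out), "\n") {
            if !strings.Contains(line, mac) {
                continue
            }
            // Look for the IPv4/prefix column, e.g. "192.168.39.19/24".
            for _, field := range strings.Fields(line) {
                if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
                    return strings.SplitN(field, "/", 2)[0], nil
                }
            }
        }
        return "", fmt.Errorf("no lease yet for %s on %s", mac, network)
    }

    func main() {
        ip, err := leaseIP("mk-ha-565823", "52:54:00:2b:2e:da")
        if err != nil {
            fmt.Println("still waiting:", err)
            return
        }
        fmt.Println("found IP:", ip)
    }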
	I1211 23:58:51.817451  106017 main.go:141] libmachine: (ha-565823) Waiting to get IP...
	I1211 23:58:51.818184  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:51.818533  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:51.818576  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:51.818514  106042 retry.go:31] will retry after 280.301496ms: waiting for machine to come up
	I1211 23:58:52.100046  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.100502  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.100533  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.100451  106042 retry.go:31] will retry after 276.944736ms: waiting for machine to come up
	I1211 23:58:52.378928  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.379349  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.379382  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.379295  106042 retry.go:31] will retry after 389.022589ms: waiting for machine to come up
	I1211 23:58:52.769835  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.770314  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.770357  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.770269  106042 retry.go:31] will retry after 542.492277ms: waiting for machine to come up
	I1211 23:58:53.313855  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:53.314281  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:53.314305  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:53.314231  106042 retry.go:31] will retry after 742.209465ms: waiting for machine to come up
	I1211 23:58:54.058032  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:54.058453  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:54.058490  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:54.058433  106042 retry.go:31] will retry after 754.421967ms: waiting for machine to come up
	I1211 23:58:54.814555  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:54.814980  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:54.815017  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:54.814915  106042 retry.go:31] will retry after 802.576471ms: waiting for machine to come up
	I1211 23:58:55.619852  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:55.620325  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:55.620362  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:55.620271  106042 retry.go:31] will retry after 1.192308346s: waiting for machine to come up
	I1211 23:58:56.815553  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:56.816025  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:56.816050  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:56.815966  106042 retry.go:31] will retry after 1.618860426s: waiting for machine to come up
	I1211 23:58:58.436766  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:58.437231  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:58.437256  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:58.437186  106042 retry.go:31] will retry after 2.219805666s: waiting for machine to come up
	I1211 23:59:00.658607  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:00.659028  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:00.659058  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:00.658968  106042 retry.go:31] will retry after 1.768582626s: waiting for machine to come up
	I1211 23:59:02.429943  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:02.430433  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:02.430464  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:02.430369  106042 retry.go:31] will retry after 2.185532844s: waiting for machine to come up
	I1211 23:59:04.617032  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:04.617473  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:04.617499  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:04.617419  106042 retry.go:31] will retry after 4.346976865s: waiting for machine to come up
	I1211 23:59:08.969389  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:08.969741  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:08.969760  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:08.969711  106042 retry.go:31] will retry after 4.969601196s: waiting for machine to come up
	I1211 23:59:13.943658  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:13.944048  106017 main.go:141] libmachine: (ha-565823) Found IP for machine: 192.168.39.19
	I1211 23:59:13.944063  106017 main.go:141] libmachine: (ha-565823) Reserving static IP address...
	I1211 23:59:13.944071  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has current primary IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:13.944392  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find host DHCP lease matching {name: "ha-565823", mac: "52:54:00:2b:2e:da", ip: "192.168.39.19"} in network mk-ha-565823
	I1211 23:59:14.015315  106017 main.go:141] libmachine: (ha-565823) DBG | Getting to WaitForSSH function...
	I1211 23:59:14.015347  106017 main.go:141] libmachine: (ha-565823) Reserved static IP address: 192.168.39.19
	I1211 23:59:14.015425  106017 main.go:141] libmachine: (ha-565823) Waiting for SSH to be available...
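
The `retry.go:31] will retry after ...` lines above are a poll loop with a growing, jittered delay (280ms, 277ms, 389ms, ... up to ~5s) that keeps re-checking the lease table until the machine comes up, roughly 24 seconds in total here. A generic sketch of that pattern (not minikube's own retry package):

    // Sketch of the poll-with-growing-delay pattern visible in the retry lines above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or maxWait elapses, sleeping an
    // increasing, jittered delay between attempts.
    func retryWithBackoff(fn func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s: %w", maxWait, err)
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            delay = delay * 3 / 2 // grow the base delay each round
        }
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(func() error {
            attempts++
            if attempts < 5 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        }, 2*time.Minute)
        fmt.Println("done:", err, "after", attempts, "attempts")
    }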
	I1211 23:59:14.017689  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:14.018021  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823
	I1211 23:59:14.018050  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find defined IP address of network mk-ha-565823 interface with MAC address 52:54:00:2b:2e:da
	I1211 23:59:14.018183  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH client type: external
	I1211 23:59:14.018223  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa (-rw-------)
	I1211 23:59:14.018268  106017 main.go:141] libmachine: (ha-565823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:59:14.018288  106017 main.go:141] libmachine: (ha-565823) DBG | About to run SSH command:
	I1211 23:59:14.018327  106017 main.go:141] libmachine: (ha-565823) DBG | exit 0
	I1211 23:59:14.021958  106017 main.go:141] libmachine: (ha-565823) DBG | SSH cmd err, output: exit status 255: 
	I1211 23:59:14.021983  106017 main.go:141] libmachine: (ha-565823) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1211 23:59:14.021992  106017 main.go:141] libmachine: (ha-565823) DBG | command : exit 0
	I1211 23:59:14.022004  106017 main.go:141] libmachine: (ha-565823) DBG | err     : exit status 255
	I1211 23:59:14.022014  106017 main.go:141] libmachine: (ha-565823) DBG | output  : 
	I1211 23:59:17.023677  106017 main.go:141] libmachine: (ha-565823) DBG | Getting to WaitForSSH function...
	I1211 23:59:17.026110  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.026503  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.026529  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.026696  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH client type: external
	I1211 23:59:17.026723  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa (-rw-------)
	I1211 23:59:17.026749  106017 main.go:141] libmachine: (ha-565823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:59:17.026776  106017 main.go:141] libmachine: (ha-565823) DBG | About to run SSH command:
	I1211 23:59:17.026792  106017 main.go:141] libmachine: (ha-565823) DBG | exit 0
	I1211 23:59:17.155941  106017 main.go:141] libmachine: (ha-565823) DBG | SSH cmd err, output: <nil>: 
	I1211 23:59:17.156245  106017 main.go:141] libmachine: (ha-565823) KVM machine creation complete!
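
WaitForSSH shells out to the system ssh client with the options logged above and runs `exit 0`; a zero exit status means the guest's sshd is reachable. The first probe at 23:59:14 fails with exit status 255 (the lease had not been picked up yet, so the target address was empty), and the retry at 23:59:17 succeeds. A standalone sketch of the same readiness probe, reusing the key path and IP from the log:

    // Sketch: probe SSH readiness by running `exit 0` over the system ssh client
    // with the options shown in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(ip, keyPath string) bool {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        ip := "192.168.39.19"
        key := "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa"
        for i := 0; i < 20; i++ {
            if sshReady(ip, key) {
                fmt.Println("SSH is available")
                return
            }
            fmt.Println("SSH not ready yet, retrying in 3s")
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }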
	I1211 23:59:17.156531  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:59:17.157110  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:17.157306  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:17.157460  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1211 23:59:17.157473  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:17.158855  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1211 23:59:17.158893  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1211 23:59:17.158902  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1211 23:59:17.158918  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.161015  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.161305  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.161347  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.161435  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.161600  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.161751  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.161869  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.162043  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.162241  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.162251  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1211 23:59:17.270900  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:59:17.270927  106017 main.go:141] libmachine: Detecting the provisioner...
	I1211 23:59:17.270938  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.273797  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.274144  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.274170  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.274323  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.274499  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.274631  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.274743  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.274871  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.275034  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.275045  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1211 23:59:17.388514  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1211 23:59:17.388598  106017 main.go:141] libmachine: found compatible host: buildroot
	I1211 23:59:17.388612  106017 main.go:141] libmachine: Provisioning with buildroot...
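
The provisioner is detected by running `cat /etc/os-release` on the guest and matching the ID, which is `buildroot` for the minikube ISO. A simplified stand-in for that detection step (the parser below is illustrative, not libmachine's):

    // Sketch: detect the provisioner from /etc/os-release output, as in the
    // "found compatible host: buildroot" step above.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func parseOSRelease(contents string) map[string]string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            fields[k] = strings.Trim(v, `"`)
        }
        return fields
    }

    func main() {
        // Output captured in the log above.
        osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        f := parseOSRelease(osRelease)
        if f["ID"] == "buildroot" {
            fmt.Println("found compatible host:", f["ID"], f["VERSION_ID"])
        }
    }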
	I1211 23:59:17.388622  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.388876  106017 buildroot.go:166] provisioning hostname "ha-565823"
	I1211 23:59:17.388901  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.389119  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.391763  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.392089  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.392117  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.392206  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.392374  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.392583  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.392750  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.392900  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.393085  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.393098  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823 && echo "ha-565823" | sudo tee /etc/hostname
	I1211 23:59:17.517872  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823
	
	I1211 23:59:17.517906  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.520794  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.521115  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.521139  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.521316  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.521505  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.521649  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.521748  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.521909  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.522131  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.522150  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:59:17.641444  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:59:17.641473  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1211 23:59:17.641523  106017 buildroot.go:174] setting up certificates
	I1211 23:59:17.641537  106017 provision.go:84] configureAuth start
	I1211 23:59:17.641550  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.641858  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:17.644632  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.644929  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.644969  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.645145  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.647106  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.647440  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.647460  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.647633  106017 provision.go:143] copyHostCerts
	I1211 23:59:17.647667  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1211 23:59:17.647703  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1211 23:59:17.647712  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1211 23:59:17.647777  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1211 23:59:17.647854  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1211 23:59:17.647873  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1211 23:59:17.647879  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1211 23:59:17.647903  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1211 23:59:17.647943  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1211 23:59:17.647959  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1211 23:59:17.647965  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1211 23:59:17.647985  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1211 23:59:17.648036  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823 san=[127.0.0.1 192.168.39.19 ha-565823 localhost minikube]
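
configureAuth copies the host-side CA and client certs into the .minikube directory and then issues a server certificate signed by minikubeCA with the SANs listed above (127.0.0.1, 192.168.39.19, ha-565823, localhost, minikube). The sketch below shows the shape of that issuance with Go's crypto/x509; it generates a throwaway CA in place of loading ca.pem/ca-key.pem, so it illustrates the step rather than reproducing libmachine's cert code.

    // Sketch: issue a server certificate signed by a CA with the SANs from the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; the real flow loads ca.pem / ca-key.pem from .minikube/certs.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        serverTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-565823"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the "generating server cert ... san=[...]" line above.
            DNSNames:    []string{"ha-565823", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.19")},
        }
        serverDER, _ := x509.CreateCertificate(rand.Reader, serverTmpl, caCert, &serverKey.PublicKey, caKey)

        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: serverDER}), 0644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }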
	I1211 23:59:17.803088  106017 provision.go:177] copyRemoteCerts
	I1211 23:59:17.803154  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:59:17.803180  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.806065  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.806383  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.806401  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.806621  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.806836  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.806981  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.807172  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:17.894618  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 23:59:17.894691  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 23:59:17.921956  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 23:59:17.922023  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 23:59:17.948821  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 23:59:17.948890  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
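
copyRemoteCerts then pushes server-key.pem, ca.pem, and server.pem into /etc/docker on the guest over the SSH session established above. A simple equivalent using scp plus a sudo move is sketched below; minikube's ssh_runner streams the bytes over its existing session instead, so the two-step copy here is only an assumption-level stand-in.

    // Sketch: copy generated certs to the guest's /etc/docker via scp + sudo mv.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
        }
    }

    func main() {
        ip := "192.168.39.19"
        key := "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa"
        sshOpts := []string{"-i", key, "-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null"}

        for _, f := range []string{"server-key.pem", "ca.pem", "server.pem"} {
            // Copy to a world-writable staging path first, then move into /etc/docker with sudo.
            run("scp", append(sshOpts, f, fmt.Sprintf("docker@%s:/tmp/%s", ip, f))...)
            run("ssh", append(sshOpts, "docker@"+ip,
                "sudo mkdir -p /etc/docker && sudo mv /tmp/"+f+" /etc/docker/"+f)...)
        }
    }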
	I1211 23:59:17.975580  106017 provision.go:87] duration metric: took 334.027463ms to configureAuth
	I1211 23:59:17.975634  106017 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:59:17.975827  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:17.975904  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.978577  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.978850  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.978901  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.979082  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.979257  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.979385  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.979493  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.979692  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.979889  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.979912  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:59:18.235267  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
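	[editor's note] The command above writes a systemd environment drop-in so CRI-O treats the service CIDR as an insecure registry, then restarts the runtime. A minimal Go sketch of that file write, under assumed paths (/tmp used here instead of /etc/sysconfig/crio.minikube so it runs without root); this is an illustration, not minikube's own implementation:

package main

import (
	"fmt"
	"os"
)

// writeCrioMinikubeOptions renders the environment drop-in that the SSH command
// above creates, marking the service CIDR as an insecure registry for CRI-O.
func writeCrioMinikubeOptions(path, serviceCIDR string) error {
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	return os.WriteFile(path, []byte(content), 0644)
}

func main() {
	// /tmp path so the sketch runs unprivileged; the log writes the file to
	// /etc/sysconfig/crio.minikube and then runs `systemctl restart crio`.
	if err := writeCrioMinikubeOptions("/tmp/crio.minikube", "10.96.0.0/12"); err != nil {
		fmt.Println("write failed:", err)
	}
}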
	
	I1211 23:59:18.235313  106017 main.go:141] libmachine: Checking connection to Docker...
	I1211 23:59:18.235325  106017 main.go:141] libmachine: (ha-565823) Calling .GetURL
	I1211 23:59:18.236752  106017 main.go:141] libmachine: (ha-565823) DBG | Using libvirt version 6000000
	I1211 23:59:18.239115  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.239502  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.239532  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.239731  106017 main.go:141] libmachine: Docker is up and running!
	I1211 23:59:18.239753  106017 main.go:141] libmachine: Reticulating splines...
	I1211 23:59:18.239771  106017 client.go:171] duration metric: took 28.270144196s to LocalClient.Create
	I1211 23:59:18.239864  106017 start.go:167] duration metric: took 28.27029823s to libmachine.API.Create "ha-565823"
	I1211 23:59:18.239885  106017 start.go:293] postStartSetup for "ha-565823" (driver="kvm2")
	I1211 23:59:18.239895  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:59:18.239917  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.240179  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:59:18.240211  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.242164  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.242466  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.242493  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.242645  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.242832  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.242993  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.243119  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.330660  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:59:18.335424  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1211 23:59:18.335447  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1211 23:59:18.335503  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1211 23:59:18.335574  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1211 23:59:18.335584  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1211 23:59:18.335717  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 23:59:18.346001  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1211 23:59:18.374524  106017 start.go:296] duration metric: took 134.623519ms for postStartSetup
	I1211 23:59:18.374583  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:59:18.375295  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:18.377900  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.378234  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.378262  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.378516  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:18.378710  106017 start.go:128] duration metric: took 28.427447509s to createHost
	I1211 23:59:18.378738  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.380862  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.381196  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.381220  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.381358  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.381537  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.381691  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.381809  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.381919  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:18.382120  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:18.382133  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:59:18.492450  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961558.472734336
	
	I1211 23:59:18.492473  106017 fix.go:216] guest clock: 1733961558.472734336
	I1211 23:59:18.492480  106017 fix.go:229] Guest: 2024-12-11 23:59:18.472734336 +0000 UTC Remote: 2024-12-11 23:59:18.378724497 +0000 UTC m=+28.540551547 (delta=94.009839ms)
	I1211 23:59:18.492521  106017 fix.go:200] guest clock delta is within tolerance: 94.009839ms
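	[editor's note] The guest-clock check above parses the VM's `date +%s.%N` output and compares it with the host timestamp recorded just before the SSH call. A minimal sketch of that comparison, assuming a hypothetical parse helper and an illustrative 2s tolerance (the real threshold lives in minikube's fix logic):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns output like "1733961558.472734336" (date +%s.%N) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	s := strings.TrimSpace(out)
	parts := strings.SplitN(s, ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, fmt.Errorf("parsing seconds in %q: %w", s, err)
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate the fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, fmt.Errorf("parsing fraction in %q: %w", s, err)
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733961558.472734336")
	if err != nil {
		panic(err)
	}
	remote := time.Unix(1733961558, 378724497) // host clock captured before the SSH round-trip
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold, not minikube's actual value
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= tolerance)
}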
	I1211 23:59:18.492529  106017 start.go:83] releasing machines lock for "ha-565823", held for 28.541373742s
	I1211 23:59:18.492553  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.492820  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:18.495388  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.495716  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.495743  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.495888  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496371  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496534  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496615  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:59:18.496654  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.496714  106017 ssh_runner.go:195] Run: cat /version.json
	I1211 23:59:18.496740  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.499135  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499486  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.499548  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499569  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499675  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.499845  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.499921  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.499961  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499985  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.500123  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.500135  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.500278  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.500460  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.500604  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.607330  106017 ssh_runner.go:195] Run: systemctl --version
	I1211 23:59:18.613387  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:59:18.776622  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:59:18.783443  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:59:18.783538  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:59:18.799688  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
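	[editor's note] The find/mv command above sidelines conflicting bridge and podman CNI configs by renaming them with a ".mk_disabled" suffix. A small Go sketch of the same idea, assuming a scratch directory rather than /etc/cni/net.d; a simplified stand-in, not the code behind cni.go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files so the runtime ignores them.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	out, err := disableBridgeCNIConfigs("/tmp/cni-net.d") // assumed scratch dir for the sketch
	fmt.Println(out, err)
}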
	I1211 23:59:18.799713  106017 start.go:495] detecting cgroup driver to use...
	I1211 23:59:18.799774  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:59:18.816025  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:59:18.830854  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1211 23:59:18.830908  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:59:18.845980  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:59:18.860893  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:59:18.978441  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:59:19.134043  106017 docker.go:233] disabling docker service ...
	I1211 23:59:19.134112  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:59:19.149156  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:59:19.162275  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:59:19.283529  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:59:19.409189  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:59:19.423558  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:59:19.442528  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1211 23:59:19.442599  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.453566  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:59:19.453654  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.464397  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.475199  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.486049  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:59:19.497021  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.507803  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.524919  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
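	[editor's note] The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch CRI-O to the cgroupfs cgroup manager. A minimal Go sketch of those two edits, assuming a local copy of the drop-in; it is simplified (it only rewrites uncommented keys, whereas the sed above also replaces commented-out lines):

package main

import (
	"fmt"
	"os"
	"strings"
)

// patchCrioConf pins pause_image and cgroup_manager in a CRI-O drop-in file.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for i, l := range lines {
		trimmed := strings.TrimSpace(l)
		switch {
		case strings.HasPrefix(trimmed, "pause_image"):
			lines[i] = fmt.Sprintf("pause_image = %q", pauseImage)
		case strings.HasPrefix(trimmed, "cgroup_manager"):
			lines[i] = fmt.Sprintf("cgroup_manager = %q", cgroupManager)
		}
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// Assumed local path for the sketch; the log edits /etc/crio/crio.conf.d/02-crio.conf over SSH.
	err := patchCrioConf("/tmp/02-crio.conf", "registry.k8s.io/pause:3.10", "cgroupfs")
	fmt.Println(err)
}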
	I1211 23:59:19.535844  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:59:19.545546  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:59:19.545598  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
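	[editor's note] The sequence above is a probe-then-fallback: the sysctl key is missing because br_netfilter is not loaded, so the module is loaded and bridged traffic becomes visible to iptables. A hedged Go sketch of that pattern using os/exec (an illustration of the fallback, not minikube's crio.go code):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter checks the bridge netfilter sysctl and loads br_netfilter if it is absent.
func ensureBridgeNetfilter() error {
	// If the sysctl key resolves, the module is already loaded and nothing needs doing.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
		return fmt.Errorf("loading br_netfilter: %w", err.Run())
	}
	// Re-check after loading the module; a failure here means the kernel lacks the feature.
	return exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("bridge netfilter unavailable:", err)
	}
}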
	I1211 23:59:19.559407  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:59:19.569383  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:59:19.689090  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:59:19.791744  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:59:19.791811  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:59:19.796877  106017 start.go:563] Will wait 60s for crictl version
	I1211 23:59:19.796945  106017 ssh_runner.go:195] Run: which crictl
	I1211 23:59:19.801083  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:59:19.845670  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:59:19.845758  106017 ssh_runner.go:195] Run: crio --version
	I1211 23:59:19.875253  106017 ssh_runner.go:195] Run: crio --version
	I1211 23:59:19.904311  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1211 23:59:19.906690  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:19.909356  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:19.909726  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:19.909755  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:19.910412  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:59:19.915735  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
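	[editor's note] The bash pipeline above makes the host.minikube.internal entry idempotent: strip any existing line for the hostname, append a fresh "IP<TAB>hostname" line, and copy the result back. A minimal Go sketch of the same edit against an assumed scratch file (not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry guarantees exactly one "ip<TAB>hostname" line in a hosts-style file.
func ensureHostEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale entry for this hostname
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Scratch path so the sketch runs unprivileged; the log rewrites /etc/hosts via sudo cp.
	if err := ensureHostEntry("/tmp/hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}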
	I1211 23:59:19.929145  106017 kubeadm.go:883] updating cluster {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:59:19.929263  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:59:19.929323  106017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:59:19.962567  106017 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1211 23:59:19.962636  106017 ssh_runner.go:195] Run: which lz4
	I1211 23:59:19.966688  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1211 23:59:19.966797  106017 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:59:19.970897  106017 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:59:19.970929  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1211 23:59:21.360986  106017 crio.go:462] duration metric: took 1.394221262s to copy over tarball
	I1211 23:59:21.361088  106017 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:59:23.449972  106017 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.088850329s)
	I1211 23:59:23.450033  106017 crio.go:469] duration metric: took 2.08900198s to extract the tarball
	I1211 23:59:23.450045  106017 ssh_runner.go:146] rm: /preloaded.tar.lz4
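	[editor's note] The preload handling above is check-then-transfer: stat the target, and only copy the tarball when it is missing, then extract and delete it. A minimal local Go sketch of the copy-if-missing step under assumed /tmp paths (the log does the same over SSH with scp):

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist.
func copyIfMissing(src, dst string) (copied bool, err error) {
	if _, err := os.Stat(dst); err == nil {
		return false, nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return false, err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return false, err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err == nil, err
}

func main() {
	ok, err := copyIfMissing("/tmp/preloaded.tar.lz4", "/tmp/preloaded-copy.tar.lz4")
	fmt.Println("copied:", ok, "err:", err)
}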
	I1211 23:59:23.487452  106017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:59:23.534823  106017 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:59:23.534855  106017 cache_images.go:84] Images are preloaded, skipping loading
	I1211 23:59:23.534866  106017 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.2 crio true true} ...
	I1211 23:59:23.535012  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:59:23.535085  106017 ssh_runner.go:195] Run: crio config
	I1211 23:59:23.584878  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:59:23.584896  106017 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 23:59:23.584905  106017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 23:59:23.584925  106017 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565823 NodeName:ha-565823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:59:23.585039  106017 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 23:59:23.585064  106017 kube-vip.go:115] generating kube-vip config ...
	I1211 23:59:23.585112  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1211 23:59:23.603981  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1211 23:59:23.604115  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1211 23:59:23.604182  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1211 23:59:23.614397  106017 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 23:59:23.614477  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1211 23:59:23.624289  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1211 23:59:23.641517  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:59:23.658716  106017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1211 23:59:23.675660  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1211 23:59:23.692530  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1211 23:59:23.696599  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:59:23.709445  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:59:23.845220  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:59:23.862954  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.19
	I1211 23:59:23.862981  106017 certs.go:194] generating shared ca certs ...
	I1211 23:59:23.863000  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:23.863207  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1211 23:59:23.863251  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1211 23:59:23.863262  106017 certs.go:256] generating profile certs ...
	I1211 23:59:23.863328  106017 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1211 23:59:23.863357  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt with IP's: []
	I1211 23:59:24.110700  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt ...
	I1211 23:59:24.110730  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt: {Name:mk50d526eb9350fec1f3c58be1ef98b2039770b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.110932  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key ...
	I1211 23:59:24.110948  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key: {Name:mk947a896656d347feed0e5ddd7c2c37edce03fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.111050  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c
	I1211 23:59:24.111082  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.254]
	I1211 23:59:24.333387  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c ...
	I1211 23:59:24.333420  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c: {Name:mkfc61798e61cb1d7ac0b35769a3179525ca368b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.333599  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c ...
	I1211 23:59:24.333627  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c: {Name:mk4a04314c10f352160875e4af47370a91a0db88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.333740  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1211 23:59:24.333840  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1211 23:59:24.333924  106017 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1211 23:59:24.333944  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt with IP's: []
	I1211 23:59:24.464961  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt ...
	I1211 23:59:24.464993  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt: {Name:mkbb1cf3b9047082cee6fcd6adaa9509e1729b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.465183  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key ...
	I1211 23:59:24.465203  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key: {Name:mkc9ec571078b7167489918f5cf8f1ea61967aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.465319  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 23:59:24.465348  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 23:59:24.465364  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 23:59:24.465387  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 23:59:24.465405  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 23:59:24.465422  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 23:59:24.465435  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 23:59:24.465452  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 23:59:24.465528  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1211 23:59:24.465577  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1211 23:59:24.465592  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1211 23:59:24.465634  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1211 23:59:24.465664  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:59:24.465695  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1211 23:59:24.465752  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1211 23:59:24.465790  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.465812  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.465831  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.466545  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:59:24.494141  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:59:24.519556  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:59:24.544702  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 23:59:24.569766  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 23:59:24.595380  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:59:24.621226  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:59:24.649860  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 23:59:24.698075  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1211 23:59:24.728714  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:59:24.753139  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1211 23:59:24.777957  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:59:24.796289  106017 ssh_runner.go:195] Run: openssl version
	I1211 23:59:24.802883  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1211 23:59:24.816553  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.821741  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.821804  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.828574  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1211 23:59:24.840713  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 23:59:24.853013  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.858281  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.858331  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.864829  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 23:59:24.875963  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1211 23:59:24.886500  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.891673  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.891726  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.898344  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
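	[editor's note] The openssl/ln steps above install each CA under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so system TLS lookups can find it. A hedged Go sketch of that pattern, shelling out to `openssl x509 -hash -noout` and symlinking into an assumed scratch directory:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks a certificate into certsDir under "<subject-hash>.0".
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link for the same hash
	return link, os.Symlink(certPath, link)
}

func main() {
	_ = os.MkdirAll("/tmp/certs", 0755) // scratch dir for the sketch; the log links into /etc/ssl/certs
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs")
	fmt.Println(link, err)
}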
	I1211 23:59:24.910633  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:59:24.915220  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:59:24.915279  106017 kubeadm.go:392] StartCluster: {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:59:24.915383  106017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:59:24.915454  106017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:59:24.954743  106017 cri.go:89] found id: ""
	I1211 23:59:24.954813  106017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:59:24.965887  106017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:59:24.975963  106017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:59:24.985759  106017 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:59:24.985784  106017 kubeadm.go:157] found existing configuration files:
	
	I1211 23:59:24.985837  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:59:24.995322  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:59:24.995387  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:59:25.005782  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:59:25.015121  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:59:25.015216  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:59:25.024739  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:59:25.033898  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:59:25.033949  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:59:25.043527  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:59:25.052795  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:59:25.052860  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
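	[editor's note] The stale-config check above keeps an existing kubeconfig only if it already points at https://control-plane.minikube.internal:8443, and otherwise removes it before `kubeadm init`. A minimal Go sketch of that keep-or-remove decision, under an assumed /tmp directory (the real files live in /etc/kubernetes):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfNotPointingAt deletes a kubeconfig that does not reference the expected endpoint.
func removeIfNotPointingAt(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up, as in the log output above
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already targets the right control plane, keep it
	}
	return os.Remove(path)
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		err := removeIfNotPointingAt("/tmp/kubernetes/"+f, "https://control-plane.minikube.internal:8443")
		fmt.Println(f, err)
	}
}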
	I1211 23:59:25.063719  106017 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:59:25.172138  106017 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1211 23:59:25.172231  106017 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 23:59:25.282095  106017 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:59:25.282220  106017 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:59:25.282346  106017 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:59:25.292987  106017 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:59:25.507248  106017 out.go:235]   - Generating certificates and keys ...
	I1211 23:59:25.507374  106017 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 23:59:25.507500  106017 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 23:59:25.628233  106017 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:59:25.895094  106017 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:59:26.195266  106017 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:59:26.355531  106017 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1211 23:59:26.415298  106017 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1211 23:59:26.415433  106017 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-565823 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1211 23:59:26.603280  106017 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1211 23:59:26.603516  106017 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-565823 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1211 23:59:26.737544  106017 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:59:26.938736  106017 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:59:27.118447  106017 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1211 23:59:27.118579  106017 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:59:27.214058  106017 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:59:27.283360  106017 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:59:27.437118  106017 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:59:27.583693  106017 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:59:27.738001  106017 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:59:27.738673  106017 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:59:27.741933  106017 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:59:27.743702  106017 out.go:235]   - Booting up control plane ...
	I1211 23:59:27.743844  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:59:27.744424  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:59:27.746935  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:59:27.765392  106017 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:59:27.772566  106017 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:59:27.772699  106017 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 23:59:27.925671  106017 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:59:27.925813  106017 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:59:28.450340  106017 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 524.075614ms
	I1211 23:59:28.450451  106017 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1211 23:59:34.524805  106017 kubeadm.go:310] [api-check] The API server is healthy after 6.076898322s
	I1211 23:59:34.537381  106017 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:59:34.553285  106017 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:59:35.079814  106017 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:59:35.080057  106017 kubeadm.go:310] [mark-control-plane] Marking the node ha-565823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:59:35.095582  106017 kubeadm.go:310] [bootstrap-token] Using token: lktsit.hvyjnx8elfe20z7f
	I1211 23:59:35.097027  106017 out.go:235]   - Configuring RBAC rules ...
	I1211 23:59:35.097177  106017 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:59:35.101780  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:59:35.113593  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:59:35.118164  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:59:35.121511  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:59:35.125148  106017 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:59:35.144131  106017 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:59:35.407109  106017 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 23:59:35.930699  106017 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 23:59:35.931710  106017 kubeadm.go:310] 
	I1211 23:59:35.931771  106017 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 23:59:35.931775  106017 kubeadm.go:310] 
	I1211 23:59:35.931851  106017 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 23:59:35.931859  106017 kubeadm.go:310] 
	I1211 23:59:35.931880  106017 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 23:59:35.931927  106017 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:59:35.931982  106017 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:59:35.932000  106017 kubeadm.go:310] 
	I1211 23:59:35.932049  106017 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 23:59:35.932058  106017 kubeadm.go:310] 
	I1211 23:59:35.932118  106017 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:59:35.932126  106017 kubeadm.go:310] 
	I1211 23:59:35.932168  106017 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 23:59:35.932259  106017 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:59:35.932333  106017 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:59:35.932350  106017 kubeadm.go:310] 
	I1211 23:59:35.932432  106017 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:59:35.932499  106017 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 23:59:35.932506  106017 kubeadm.go:310] 
	I1211 23:59:35.932579  106017 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lktsit.hvyjnx8elfe20z7f \
	I1211 23:59:35.932666  106017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1211 23:59:35.932687  106017 kubeadm.go:310] 	--control-plane 
	I1211 23:59:35.932692  106017 kubeadm.go:310] 
	I1211 23:59:35.932780  106017 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:59:35.932793  106017 kubeadm.go:310] 
	I1211 23:59:35.932900  106017 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lktsit.hvyjnx8elfe20z7f \
	I1211 23:59:35.933031  106017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1211 23:59:35.933914  106017 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
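	The block above is kubeadm init's stdout as captured by minikube. Its [kubelet-check] and [api-check] phases simply poll health endpoints (the kubelet's http://127.0.0.1:10248/healthz and the API server) until they answer 200 OK or a 4m0s deadline expires. A minimal Go sketch of that kind of wait, with assumed function names, intervals, and TLS handling rather than minikube's or kubeadm's actual code:

	// Illustrative only: a health poll in the spirit of the [kubelet-check]
	// and [api-check] phases logged above. Names and timeouts are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The API server speaks HTTPS with a cluster CA; skipping
			// verification keeps this sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		// The kubelet healthz URL and 4m0s limit mirror the log above.
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}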
	I1211 23:59:35.934034  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:59:35.934056  106017 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 23:59:35.936050  106017 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1211 23:59:35.937506  106017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1211 23:59:35.943577  106017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1211 23:59:35.943610  106017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1211 23:59:35.964609  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
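	The CNI step above detects a multi-node-capable setup, copies a kindnet manifest to /var/tmp/minikube/cni.yaml inside the VM, and applies it with the pinned kubectl binary. Roughly the same apply, sketched locally with os/exec (the SSH transport and retry logic that minikube's ssh_runner adds are omitted; this is an illustration, not the driver's code):

	// Sketch of the apply command logged above, run via os/exec. Binary and
	// manifest paths are copied from the log; the wiring is an assumption.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.31.2/kubectl",
			"apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml",
		)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}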
	I1211 23:59:36.354699  106017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:59:36.354799  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:36.354832  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823 minikube.k8s.io/updated_at=2024_12_11T23_59_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=true
	I1211 23:59:36.386725  106017 ops.go:34] apiserver oom_adj: -16
	I1211 23:59:36.511318  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:37.011972  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:37.511719  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:38.012059  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:38.511637  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:39.012451  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:39.512222  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.012218  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.512204  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.605442  106017 kubeadm.go:1113] duration metric: took 4.250718988s to wait for elevateKubeSystemPrivileges
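	The "get sa default" loop above re-runs about every 500ms until the "default" ServiceAccount exists, which is how minikube decides that kube-system privileges have been elevated and workloads can be scheduled. The same wait, sketched with client-go instead of shelling out to kubectl (interval and timeout here are assumptions):

	// Sketch: wait for the "default" ServiceAccount using client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Same check as the logged "kubectl get sa default".
			_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for the default service account")
	}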
	I1211 23:59:40.605479  106017 kubeadm.go:394] duration metric: took 15.690206878s to StartCluster
	I1211 23:59:40.605505  106017 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:40.605593  106017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:59:40.606578  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:40.606860  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:59:40.606860  106017 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:59:40.606883  106017 start.go:241] waiting for startup goroutines ...
	I1211 23:59:40.606899  106017 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 23:59:40.606982  106017 addons.go:69] Setting storage-provisioner=true in profile "ha-565823"
	I1211 23:59:40.606989  106017 addons.go:69] Setting default-storageclass=true in profile "ha-565823"
	I1211 23:59:40.607004  106017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565823"
	I1211 23:59:40.607018  106017 addons.go:234] Setting addon storage-provisioner=true in "ha-565823"
	I1211 23:59:40.607045  106017 host.go:66] Checking if "ha-565823" exists ...
	I1211 23:59:40.607426  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.607469  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.607635  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:40.607793  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.607838  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.622728  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I1211 23:59:40.622807  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1211 23:59:40.623266  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.623370  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.623966  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.623993  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.624004  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.624015  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.624390  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.624398  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.624567  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.624920  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.624961  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.626695  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:59:40.627009  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 23:59:40.627499  106017 cert_rotation.go:140] Starting client certificate rotation controller
	I1211 23:59:40.627813  106017 addons.go:234] Setting addon default-storageclass=true in "ha-565823"
	I1211 23:59:40.627859  106017 host.go:66] Checking if "ha-565823" exists ...
	I1211 23:59:40.628133  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.628177  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.640869  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I1211 23:59:40.641437  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.642016  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.642043  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.642434  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.642635  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.643106  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I1211 23:59:40.643674  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.644240  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.644275  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.644588  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.644640  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:40.645087  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.645136  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.646489  106017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:59:40.647996  106017 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:59:40.648015  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:59:40.648030  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:40.651165  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.651679  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:40.651703  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.651939  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:40.652136  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:40.652353  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:40.652515  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:40.661089  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
	I1211 23:59:40.661521  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.661949  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.661970  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.662302  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.662464  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.664023  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:40.664204  106017 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:59:40.664219  106017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:59:40.664234  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:40.666799  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.667194  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:40.667218  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.667366  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:40.667518  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:40.667676  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:40.667787  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:40.766556  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:59:40.838934  106017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:59:40.853931  106017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:59:41.384410  106017 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
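	The sed pipeline at 23:59:40.766556 rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host-side gateway 192.168.39.1, and adds a log directive before errors. Reconstructed from that sed expression, the fragment injected ahead of the "forward . /etc/resolv.conf" line looks like this (an illustration of the intended result, not a dump of the live ConfigMap):

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }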
	I1211 23:59:41.687789  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.687839  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688024  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688044  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688143  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688158  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.688166  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688175  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688183  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688295  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688309  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688316  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688337  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688398  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688424  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688407  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.688511  106017 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 23:59:41.688531  106017 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 23:59:41.688635  106017 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1211 23:59:41.688642  106017 round_trippers.go:469] Request Headers:
	I1211 23:59:41.688654  106017 round_trippers.go:473]     Accept: application/json, */*
	I1211 23:59:41.688660  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1211 23:59:41.689067  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.689084  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.689112  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.703120  106017 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1211 23:59:41.703858  106017 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1211 23:59:41.703876  106017 round_trippers.go:469] Request Headers:
	I1211 23:59:41.703888  106017 round_trippers.go:473]     Content-Type: application/json
	I1211 23:59:41.703896  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1211 23:59:41.703902  106017 round_trippers.go:473]     Accept: application/json, */*
	I1211 23:59:41.707451  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1211 23:59:41.707880  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.707905  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.708200  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.708289  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.708309  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.710098  106017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1211 23:59:41.711624  106017 addons.go:510] duration metric: took 1.104728302s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1211 23:59:41.711657  106017 start.go:246] waiting for cluster config update ...
	I1211 23:59:41.711669  106017 start.go:255] writing updated cluster config ...
	I1211 23:59:41.713334  106017 out.go:201] 
	I1211 23:59:41.714788  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:41.714856  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:41.716555  106017 out.go:177] * Starting "ha-565823-m02" control-plane node in "ha-565823" cluster
	I1211 23:59:41.717794  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:59:41.717815  106017 cache.go:56] Caching tarball of preloaded images
	I1211 23:59:41.717923  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:59:41.717935  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:59:41.717999  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:41.718156  106017 start.go:360] acquireMachinesLock for ha-565823-m02: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:59:41.718199  106017 start.go:364] duration metric: took 25.794µs to acquireMachinesLock for "ha-565823-m02"
	I1211 23:59:41.718224  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:59:41.718291  106017 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1211 23:59:41.719692  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 23:59:41.719777  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:41.719812  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:41.734465  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1211 23:59:41.734950  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:41.735455  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:41.735478  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:41.735843  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:41.736006  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1211 23:59:41.736149  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1211 23:59:41.736349  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1211 23:59:41.736395  106017 client.go:168] LocalClient.Create starting
	I1211 23:59:41.736425  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:59:41.736455  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:59:41.736469  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:59:41.736519  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:59:41.736537  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:59:41.736547  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:59:41.736559  106017 main.go:141] libmachine: Running pre-create checks...
	I1211 23:59:41.736567  106017 main.go:141] libmachine: (ha-565823-m02) Calling .PreCreateCheck
	I1211 23:59:41.736735  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1211 23:59:41.737076  106017 main.go:141] libmachine: Creating machine...
	I1211 23:59:41.737091  106017 main.go:141] libmachine: (ha-565823-m02) Calling .Create
	I1211 23:59:41.737203  106017 main.go:141] libmachine: (ha-565823-m02) Creating KVM machine...
	I1211 23:59:41.738412  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found existing default KVM network
	I1211 23:59:41.738502  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found existing private KVM network mk-ha-565823
	I1211 23:59:41.738691  106017 main.go:141] libmachine: (ha-565823-m02) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 ...
	I1211 23:59:41.738735  106017 main.go:141] libmachine: (ha-565823-m02) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:59:41.738778  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:41.738685  106399 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:59:41.738888  106017 main.go:141] libmachine: (ha-565823-m02) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:59:42.010827  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.010671  106399 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa...
	I1211 23:59:42.081269  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.081125  106399 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk...
	I1211 23:59:42.081297  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Writing magic tar header
	I1211 23:59:42.081315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Writing SSH key tar header
	I1211 23:59:42.081327  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.081241  106399 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 ...
	I1211 23:59:42.081337  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02
	I1211 23:59:42.081349  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:59:42.081395  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 (perms=drwx------)
	I1211 23:59:42.081428  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:59:42.081445  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:59:42.081465  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:59:42.081477  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:59:42.081489  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:59:42.081497  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home
	I1211 23:59:42.081510  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:59:42.081524  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:59:42.081536  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Skipping /home - not owner
	I1211 23:59:42.081553  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:59:42.081564  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:59:42.081577  106017 main.go:141] libmachine: (ha-565823-m02) Creating domain...
	I1211 23:59:42.082570  106017 main.go:141] libmachine: (ha-565823-m02) define libvirt domain using xml: 
	I1211 23:59:42.082593  106017 main.go:141] libmachine: (ha-565823-m02) <domain type='kvm'>
	I1211 23:59:42.082600  106017 main.go:141] libmachine: (ha-565823-m02)   <name>ha-565823-m02</name>
	I1211 23:59:42.082605  106017 main.go:141] libmachine: (ha-565823-m02)   <memory unit='MiB'>2200</memory>
	I1211 23:59:42.082610  106017 main.go:141] libmachine: (ha-565823-m02)   <vcpu>2</vcpu>
	I1211 23:59:42.082618  106017 main.go:141] libmachine: (ha-565823-m02)   <features>
	I1211 23:59:42.082626  106017 main.go:141] libmachine: (ha-565823-m02)     <acpi/>
	I1211 23:59:42.082641  106017 main.go:141] libmachine: (ha-565823-m02)     <apic/>
	I1211 23:59:42.082671  106017 main.go:141] libmachine: (ha-565823-m02)     <pae/>
	I1211 23:59:42.082693  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.082705  106017 main.go:141] libmachine: (ha-565823-m02)   </features>
	I1211 23:59:42.082719  106017 main.go:141] libmachine: (ha-565823-m02)   <cpu mode='host-passthrough'>
	I1211 23:59:42.082728  106017 main.go:141] libmachine: (ha-565823-m02)   
	I1211 23:59:42.082736  106017 main.go:141] libmachine: (ha-565823-m02)   </cpu>
	I1211 23:59:42.082744  106017 main.go:141] libmachine: (ha-565823-m02)   <os>
	I1211 23:59:42.082754  106017 main.go:141] libmachine: (ha-565823-m02)     <type>hvm</type>
	I1211 23:59:42.082761  106017 main.go:141] libmachine: (ha-565823-m02)     <boot dev='cdrom'/>
	I1211 23:59:42.082771  106017 main.go:141] libmachine: (ha-565823-m02)     <boot dev='hd'/>
	I1211 23:59:42.082779  106017 main.go:141] libmachine: (ha-565823-m02)     <bootmenu enable='no'/>
	I1211 23:59:42.082792  106017 main.go:141] libmachine: (ha-565823-m02)   </os>
	I1211 23:59:42.082803  106017 main.go:141] libmachine: (ha-565823-m02)   <devices>
	I1211 23:59:42.082811  106017 main.go:141] libmachine: (ha-565823-m02)     <disk type='file' device='cdrom'>
	I1211 23:59:42.082828  106017 main.go:141] libmachine: (ha-565823-m02)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/boot2docker.iso'/>
	I1211 23:59:42.082836  106017 main.go:141] libmachine: (ha-565823-m02)       <target dev='hdc' bus='scsi'/>
	I1211 23:59:42.082847  106017 main.go:141] libmachine: (ha-565823-m02)       <readonly/>
	I1211 23:59:42.082857  106017 main.go:141] libmachine: (ha-565823-m02)     </disk>
	I1211 23:59:42.082887  106017 main.go:141] libmachine: (ha-565823-m02)     <disk type='file' device='disk'>
	I1211 23:59:42.082908  106017 main.go:141] libmachine: (ha-565823-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:59:42.082928  106017 main.go:141] libmachine: (ha-565823-m02)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk'/>
	I1211 23:59:42.082944  106017 main.go:141] libmachine: (ha-565823-m02)       <target dev='hda' bus='virtio'/>
	I1211 23:59:42.082957  106017 main.go:141] libmachine: (ha-565823-m02)     </disk>
	I1211 23:59:42.082968  106017 main.go:141] libmachine: (ha-565823-m02)     <interface type='network'>
	I1211 23:59:42.082978  106017 main.go:141] libmachine: (ha-565823-m02)       <source network='mk-ha-565823'/>
	I1211 23:59:42.082985  106017 main.go:141] libmachine: (ha-565823-m02)       <model type='virtio'/>
	I1211 23:59:42.082990  106017 main.go:141] libmachine: (ha-565823-m02)     </interface>
	I1211 23:59:42.082997  106017 main.go:141] libmachine: (ha-565823-m02)     <interface type='network'>
	I1211 23:59:42.083003  106017 main.go:141] libmachine: (ha-565823-m02)       <source network='default'/>
	I1211 23:59:42.083012  106017 main.go:141] libmachine: (ha-565823-m02)       <model type='virtio'/>
	I1211 23:59:42.083025  106017 main.go:141] libmachine: (ha-565823-m02)     </interface>
	I1211 23:59:42.083038  106017 main.go:141] libmachine: (ha-565823-m02)     <serial type='pty'>
	I1211 23:59:42.083047  106017 main.go:141] libmachine: (ha-565823-m02)       <target port='0'/>
	I1211 23:59:42.083054  106017 main.go:141] libmachine: (ha-565823-m02)     </serial>
	I1211 23:59:42.083065  106017 main.go:141] libmachine: (ha-565823-m02)     <console type='pty'>
	I1211 23:59:42.083077  106017 main.go:141] libmachine: (ha-565823-m02)       <target type='serial' port='0'/>
	I1211 23:59:42.083089  106017 main.go:141] libmachine: (ha-565823-m02)     </console>
	I1211 23:59:42.083098  106017 main.go:141] libmachine: (ha-565823-m02)     <rng model='virtio'>
	I1211 23:59:42.083112  106017 main.go:141] libmachine: (ha-565823-m02)       <backend model='random'>/dev/random</backend>
	I1211 23:59:42.083126  106017 main.go:141] libmachine: (ha-565823-m02)     </rng>
	I1211 23:59:42.083154  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.083172  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.083184  106017 main.go:141] libmachine: (ha-565823-m02)   </devices>
	I1211 23:59:42.083193  106017 main.go:141] libmachine: (ha-565823-m02) </domain>
	I1211 23:59:42.083206  106017 main.go:141] libmachine: (ha-565823-m02) 
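	The XML above is the libvirt domain the kvm2 driver defines for ha-565823-m02: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, a raw disk image, and two virtio NICs (the cluster network mk-ha-565823 plus libvirt's default network). Defining and starting an equivalent domain by hand could look like the sketch below; minikube talks to the libvirt API directly rather than shelling out to virsh, and the XML path here is an assumption:

	// Sketch: register and boot a libvirt domain from an XML file via virsh.
	// Illustration only; the XML path is assumed for this example.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println(name, "failed:", err)
		}
	}

	func main() {
		run("virsh", "define", "/tmp/ha-565823-m02.xml") // register the domain definition
		run("virsh", "start", "ha-565823-m02")           // boot the VM
	}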
	I1211 23:59:42.090031  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:4e:60:e6 in network default
	I1211 23:59:42.090722  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring networks are active...
	I1211 23:59:42.090744  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:42.091386  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring network default is active
	I1211 23:59:42.091728  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring network mk-ha-565823 is active
	I1211 23:59:42.092172  106017 main.go:141] libmachine: (ha-565823-m02) Getting domain xml...
	I1211 23:59:42.092821  106017 main.go:141] libmachine: (ha-565823-m02) Creating domain...
	I1211 23:59:43.306722  106017 main.go:141] libmachine: (ha-565823-m02) Waiting to get IP...
	I1211 23:59:43.307541  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.307970  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.308021  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.307943  106399 retry.go:31] will retry after 188.292611ms: waiting for machine to come up
	I1211 23:59:43.498538  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.498980  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.499007  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.498936  106399 retry.go:31] will retry after 383.283577ms: waiting for machine to come up
	I1211 23:59:43.883676  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.884158  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.884186  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.884123  106399 retry.go:31] will retry after 368.673726ms: waiting for machine to come up
	I1211 23:59:44.254720  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:44.255182  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:44.255205  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:44.255142  106399 retry.go:31] will retry after 403.445822ms: waiting for machine to come up
	I1211 23:59:44.660664  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:44.661153  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:44.661178  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:44.661074  106399 retry.go:31] will retry after 718.942978ms: waiting for machine to come up
	I1211 23:59:45.382183  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:45.382736  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:45.382761  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:45.382694  106399 retry.go:31] will retry after 941.806671ms: waiting for machine to come up
	I1211 23:59:46.326070  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:46.326533  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:46.326566  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:46.326481  106399 retry.go:31] will retry after 1.01864437s: waiting for machine to come up
	I1211 23:59:47.347315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:47.347790  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:47.347812  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:47.347737  106399 retry.go:31] will retry after 1.213138s: waiting for machine to come up
	I1211 23:59:48.562238  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:48.562705  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:48.562737  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:48.562658  106399 retry.go:31] will retry after 1.846591325s: waiting for machine to come up
	I1211 23:59:50.410650  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:50.411116  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:50.411143  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:50.411072  106399 retry.go:31] will retry after 2.02434837s: waiting for machine to come up
	I1211 23:59:52.436763  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:52.437247  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:52.437276  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:52.437194  106399 retry.go:31] will retry after 1.785823174s: waiting for machine to come up
	I1211 23:59:54.224640  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:54.224948  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:54.224975  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:54.224901  106399 retry.go:31] will retry after 2.203569579s: waiting for machine to come up
	I1211 23:59:56.431378  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:56.431904  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:56.431933  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:56.431858  106399 retry.go:31] will retry after 3.94903919s: waiting for machine to come up
	I1212 00:00:00.384703  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:00.385175  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1212 00:00:00.385208  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1212 00:00:00.385121  106399 retry.go:31] will retry after 3.809627495s: waiting for machine to come up
	I1212 00:00:04.197607  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.198181  106017 main.go:141] libmachine: (ha-565823-m02) Found IP for machine: 192.168.39.103
	I1212 00:00:04.198204  106017 main.go:141] libmachine: (ha-565823-m02) Reserving static IP address...
	I1212 00:00:04.198220  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has current primary IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.198616  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find host DHCP lease matching {name: "ha-565823-m02", mac: "52:54:00:cc:31:80", ip: "192.168.39.103"} in network mk-ha-565823
	I1212 00:00:04.273114  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Getting to WaitForSSH function...
	I1212 00:00:04.273143  106017 main.go:141] libmachine: (ha-565823-m02) Reserved static IP address: 192.168.39.103
	I1212 00:00:04.273155  106017 main.go:141] libmachine: (ha-565823-m02) Waiting for SSH to be available...
	I1212 00:00:04.275998  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.276409  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.276438  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.276561  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using SSH client type: external
	I1212 00:00:04.276592  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa (-rw-------)
	I1212 00:00:04.276623  106017 main.go:141] libmachine: (ha-565823-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:00:04.276639  106017 main.go:141] libmachine: (ha-565823-m02) DBG | About to run SSH command:
	I1212 00:00:04.276655  106017 main.go:141] libmachine: (ha-565823-m02) DBG | exit 0
	I1212 00:00:04.400102  106017 main.go:141] libmachine: (ha-565823-m02) DBG | SSH cmd err, output: <nil>: 
	I1212 00:00:04.400348  106017 main.go:141] libmachine: (ha-565823-m02) KVM machine creation complete!
	I1212 00:00:04.400912  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1212 00:00:04.401484  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:04.401664  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:04.401821  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:00:04.401837  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetState
	I1212 00:00:04.403174  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:00:04.403192  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:00:04.403199  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:00:04.403208  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.405388  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.405786  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.405820  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.405928  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.406109  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.406313  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.406472  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.406636  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.406846  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.406860  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:00:04.507379  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
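	With an IP reserved, the driver verifies SSH reachability by running "exit 0" over the connection, first through the external ssh binary (flags logged at 00:00:04.276623) and then through a native client as shown just above. A comparable standalone check using golang.org/x/crypto/ssh, with the key path and address taken from the log and everything else assumed for the sketch:

	// Sketch: run "exit 0" over SSH to confirm the guest is reachable.
	// Host key checking is skipped, mirroring StrictHostKeyChecking=no in the log.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "192.168.39.103:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		if err := session.Run("exit 0"); err != nil {
			panic(err)
		}
		fmt.Println("SSH is available")
	}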
	I1212 00:00:04.507409  106017 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:00:04.507426  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.510219  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.510595  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.510633  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.510776  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.511014  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.511172  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.511323  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.511507  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.511752  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.511765  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:00:04.612413  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:00:04.612516  106017 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:00:04.612530  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:00:04.612538  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.612840  106017 buildroot.go:166] provisioning hostname "ha-565823-m02"
	I1212 00:00:04.612874  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.613079  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.615872  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.616272  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.616326  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.616447  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.616621  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.616780  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.616976  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.617134  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.617294  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.617306  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823-m02 && echo "ha-565823-m02" | sudo tee /etc/hostname
	I1212 00:00:04.736911  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823-m02
	
	I1212 00:00:04.736949  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.739899  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.740287  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.740321  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.740530  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.740723  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.740885  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.741022  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.741259  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.741462  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.741481  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:00:04.854133  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:00:04.854171  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:00:04.854189  106017 buildroot.go:174] setting up certificates
	I1212 00:00:04.854199  106017 provision.go:84] configureAuth start
	I1212 00:00:04.854213  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.854617  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:04.858031  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.858466  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.858492  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.858772  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.860980  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.861315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.861344  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.861482  106017 provision.go:143] copyHostCerts
	I1212 00:00:04.861512  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:00:04.861546  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:00:04.861556  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:00:04.861621  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:00:04.861699  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:00:04.861718  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:00:04.861725  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:00:04.861748  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:00:04.861792  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:00:04.861809  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:00:04.861815  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:00:04.861836  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:00:04.861892  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823-m02 san=[127.0.0.1 192.168.39.103 ha-565823-m02 localhost minikube]
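	(The entry above is where the provisioner creates the per-machine server certificate: its SANs cover every name the host may be reached by — loopback, the DHCP-assigned IP 192.168.39.103, the hostname ha-565823-m02 and the generic localhost/minikube aliases — and it is signed by the ca.pem/ca-key.pem listed in the auth options. The following is a minimal standalone Go sketch of building such a certificate with the standard crypto/x509 package; it is illustrative only, not minikube's provisioning code, and it self-signs purely to keep the sketch short.)

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs taken from the log entry above.
		ips := []net.IP{
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.103"),
		}
		dnsNames := []string{"ha-565823-m02", "localhost", "minikube"}

		// Self-signed only to keep the sketch compact; minikube signs the
		// server cert with its own CA key instead.
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-565823-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dnsNames,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}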
	I1212 00:00:05.017387  106017 provision.go:177] copyRemoteCerts
	I1212 00:00:05.017447  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:00:05.017475  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.020320  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.020751  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.020781  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.020994  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.021285  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.021461  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.021631  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.103134  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:00:05.103225  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:00:05.128318  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:00:05.128392  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:00:05.152814  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:00:05.152893  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:00:05.177479  106017 provision.go:87] duration metric: took 323.264224ms to configureAuth
	I1212 00:00:05.177509  106017 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:00:05.177674  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:05.177748  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.180791  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.181249  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.181280  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.181463  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.181702  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.181870  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.182010  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.182176  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:05.182341  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:05.182357  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:00:05.417043  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:00:05.417067  106017 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:00:05.417075  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetURL
	I1212 00:00:05.418334  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using libvirt version 6000000
	I1212 00:00:05.420596  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.420905  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.420938  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.421114  106017 main.go:141] libmachine: Docker is up and running!
	I1212 00:00:05.421129  106017 main.go:141] libmachine: Reticulating splines...
	I1212 00:00:05.421139  106017 client.go:171] duration metric: took 23.684732891s to LocalClient.Create
	I1212 00:00:05.421170  106017 start.go:167] duration metric: took 23.684823561s to libmachine.API.Create "ha-565823"
	I1212 00:00:05.421183  106017 start.go:293] postStartSetup for "ha-565823-m02" (driver="kvm2")
	I1212 00:00:05.421197  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:00:05.421214  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.421468  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:00:05.421495  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.424694  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.425050  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.425083  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.425238  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.425449  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.425599  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.425739  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.506562  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:00:05.511891  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:00:05.511921  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:00:05.512000  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:00:05.512114  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:00:05.512128  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:00:05.512236  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:00:05.525426  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:00:05.552318  106017 start.go:296] duration metric: took 131.1154ms for postStartSetup
	I1212 00:00:05.552386  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1212 00:00:05.553038  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:05.556173  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.556661  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.556704  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.556972  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:05.557179  106017 start.go:128] duration metric: took 23.838875142s to createHost
	I1212 00:00:05.557206  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.559644  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.560000  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.560021  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.560242  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.560469  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.560659  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.560833  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.561033  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:05.561234  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:05.561248  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:00:05.664479  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961605.636878321
	
	I1212 00:00:05.664504  106017 fix.go:216] guest clock: 1733961605.636878321
	I1212 00:00:05.664511  106017 fix.go:229] Guest: 2024-12-12 00:00:05.636878321 +0000 UTC Remote: 2024-12-12 00:00:05.557193497 +0000 UTC m=+75.719020541 (delta=79.684824ms)
	I1212 00:00:05.664529  106017 fix.go:200] guest clock delta is within tolerance: 79.684824ms
	I1212 00:00:05.664536  106017 start.go:83] releasing machines lock for "ha-565823-m02", held for 23.946326821s
	I1212 00:00:05.664559  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.664834  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:05.667309  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.667587  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.667625  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.670169  106017 out.go:177] * Found network options:
	I1212 00:00:05.671775  106017 out.go:177]   - NO_PROXY=192.168.39.19
	W1212 00:00:05.673420  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:00:05.673451  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.673974  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.674184  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.674310  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:00:05.674362  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	W1212 00:00:05.674404  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:00:05.674488  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:00:05.674510  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.677209  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677558  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.677588  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677632  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677782  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.677967  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.678067  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.678094  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.678133  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.678286  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.678288  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.678440  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.678560  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.678668  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.906824  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:00:05.913945  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:00:05.914026  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:00:05.931775  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:00:05.931797  106017 start.go:495] detecting cgroup driver to use...
	I1212 00:00:05.931857  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:00:05.948556  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:00:05.963326  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:00:05.963397  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:00:05.978208  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:00:05.992483  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:00:06.103988  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:00:06.275509  106017 docker.go:233] disabling docker service ...
	I1212 00:00:06.275580  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:00:06.293042  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:00:06.306048  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:00:06.431702  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:00:06.557913  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:00:06.573066  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:00:06.592463  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:00:06.592536  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.604024  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:00:06.604087  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.615267  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.626194  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.637083  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:00:06.648061  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.659477  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.677134  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.687875  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:00:06.701376  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:00:06.701451  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:00:06.714621  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
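	(The preceding entries show the usual CRI networking prerequisite: the bridge-nf-call-iptables sysctl only exists once the br_netfilter kernel module is loaded, so the failed sysctl probe — which the log itself flags as "might be okay" — is followed by a modprobe and by enabling IPv4 forwarding. A rough Go sketch of the same check-then-load-then-enable sequence; the paths and commands are the ones in the log, but the helper itself is illustrative and not minikube's implementation.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the log sequence: if the
	// bridge-nf-call-iptables sysctl is missing, load br_netfilter,
	// then enable IPv4 forwarding. Illustrative only; run as root.
	func ensureBridgeNetfilter() error {
		const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(sysctlPath); os.IsNotExist(err) {
			// The sysctl appears only after the module is loaded.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}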
	I1212 00:00:06.724651  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:06.844738  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:00:06.941123  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:00:06.941186  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:00:06.946025  106017 start.go:563] Will wait 60s for crictl version
	I1212 00:00:06.946103  106017 ssh_runner.go:195] Run: which crictl
	I1212 00:00:06.950454  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:00:06.989220  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:00:06.989302  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:00:07.018407  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:00:07.049375  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:00:07.051430  106017 out.go:177]   - env NO_PROXY=192.168.39.19
	I1212 00:00:07.052588  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:07.055087  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:07.055359  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:07.055377  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:07.055577  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:00:07.059718  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:00:07.072121  106017 mustload.go:65] Loading cluster: ha-565823
	I1212 00:00:07.072328  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:07.072649  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:07.072692  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:07.087345  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I1212 00:00:07.087790  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:07.088265  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:07.088285  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:07.088623  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:07.088818  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:00:07.090394  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:00:07.090786  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:07.090832  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:07.107441  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I1212 00:00:07.107836  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:07.108308  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:07.108327  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:07.108632  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:07.108786  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:07.108915  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.103
	I1212 00:00:07.108926  106017 certs.go:194] generating shared ca certs ...
	I1212 00:00:07.108939  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.109062  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:00:07.109105  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:00:07.109114  106017 certs.go:256] generating profile certs ...
	I1212 00:00:07.109178  106017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:00:07.109202  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc
	I1212 00:00:07.109217  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.254]
	I1212 00:00:07.203114  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc ...
	I1212 00:00:07.203150  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc: {Name:mk3a75c055b0a829a056d90903c78ae5decf9bac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.203349  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc ...
	I1212 00:00:07.203372  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc: {Name:mkce850d5486843203391b76609d5fd65c614c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.203468  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:00:07.203647  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1212 00:00:07.203815  106017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:00:07.203836  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:00:07.203855  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:00:07.203870  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:00:07.203891  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:00:07.203909  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:00:07.203931  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:00:07.203949  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:00:07.203968  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:00:07.204035  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:00:07.204078  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:00:07.204113  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:00:07.204170  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:00:07.204217  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:00:07.204255  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:00:07.204310  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:00:07.204351  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.204383  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.204402  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.204445  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:00:07.207043  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:07.207413  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:00:07.207439  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:07.207647  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:00:07.207863  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:00:07.208027  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:00:07.208177  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:00:07.288012  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 00:00:07.293204  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 00:00:07.304789  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 00:00:07.310453  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 00:00:07.321124  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 00:00:07.326057  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 00:00:07.337737  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 00:00:07.342691  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1212 00:00:07.354806  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 00:00:07.359143  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 00:00:07.371799  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 00:00:07.376295  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 00:00:07.387705  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:00:07.415288  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:00:07.440414  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:00:07.466177  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:00:07.490907  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 00:00:07.517228  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:00:07.542858  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:00:07.567465  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:00:07.592181  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:00:07.616218  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:00:07.641063  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:00:07.665682  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 00:00:07.683443  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 00:00:07.700820  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 00:00:07.718283  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1212 00:00:07.735173  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 00:00:07.752079  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 00:00:07.770479  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 00:00:07.789102  106017 ssh_runner.go:195] Run: openssl version
	I1212 00:00:07.795248  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:00:07.806811  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.811750  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.811816  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.818034  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:00:07.829409  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:00:07.840952  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.845782  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.845853  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.851849  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:00:07.863158  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:00:07.875091  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.880111  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.880173  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.886325  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:00:07.897750  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:00:07.902056  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:00:07.902131  106017 kubeadm.go:934] updating node {m02 192.168.39.103 8443 v1.31.2 crio true true} ...
	I1212 00:00:07.902244  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:00:07.902279  106017 kube-vip.go:115] generating kube-vip config ...
	I1212 00:00:07.902323  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:00:07.920010  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:00:07.920099  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
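	(The static pod manifest above is what provides the control-plane VIP for this HA cluster: kube-vip claims 192.168.39.254 on eth0 via leader election and, with lb_enable set, load-balances port 8443 across the control-plane nodes; a few entries below the rendered file is copied to /etc/kubernetes/manifests/kube-vip.yaml. A minimal Go text/template sketch of rendering such a manifest with the address, interface and port as parameters; the trimmed template fragment and field names here are an illustration under stated assumptions, not minikube's actual kube-vip template.)

	package main

	import (
		"os"
		"text/template"
	)

	// vipParams holds the values that vary per cluster in the manifest above.
	type vipParams struct {
		Address   string // control-plane VIP, e.g. 192.168.39.254
		Interface string // NIC the VIP is announced on, e.g. eth0
		Port      string // apiserver port, e.g. 8443
		Image     string // kube-vip image tag
	}

	// Heavily trimmed static pod manifest; only the parameterised fields are shown.
	const vipManifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args: ["manager"]
	    image: {{ .Image }}
	    name: kube-vip
	    env:
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: port
	      value: "{{ .Port }}"
	    - name: address
	      value: {{ .Address }}
	  hostNetwork: true
	`

	func main() {
		tmpl := template.Must(template.New("kube-vip").Parse(vipManifest))
		p := vipParams{
			Address:   "192.168.39.254",
			Interface: "eth0",
			Port:      "8443",
			Image:     "ghcr.io/kube-vip/kube-vip:v0.8.7",
		}
		// Writes the rendered manifest to stdout; minikube instead copies it
		// to /etc/kubernetes/manifests on the node over SSH.
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}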
	I1212 00:00:07.920166  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:00:07.930159  106017 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1212 00:00:07.930221  106017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1212 00:00:07.939751  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1212 00:00:07.939776  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:00:07.939831  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:00:07.939835  106017 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1212 00:00:07.939861  106017 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1212 00:00:07.944054  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1212 00:00:07.944086  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1212 00:00:09.149265  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:00:09.168056  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:00:09.168181  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:00:09.173566  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1212 00:00:09.173601  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1212 00:00:09.219150  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:00:09.219238  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:00:09.234545  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1212 00:00:09.234589  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1212 00:00:09.726465  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 00:00:09.736811  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1212 00:00:09.753799  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:00:09.771455  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:00:09.789916  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:00:09.794008  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:00:09.807290  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:09.944370  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:00:09.973225  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:00:09.973893  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:09.973959  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:09.989196  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I1212 00:00:09.989723  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:09.990363  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:09.990386  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:09.990735  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:09.990931  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:09.991104  106017 start.go:317] joinCluster: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:00:09.991225  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:00:09.991249  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:00:09.994437  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:09.995018  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:00:09.995065  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:09.995202  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:00:09.995448  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:00:09.995585  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:00:09.995765  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:00:10.156968  106017 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:10.157029  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token huaiy2.jqx4ang4teqw9q83 --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443"
	I1212 00:00:31.347275  106017 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token huaiy2.jqx4ang4teqw9q83 --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443": (21.190211224s)
	I1212 00:00:31.347321  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:00:31.826934  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823-m02 minikube.k8s.io/updated_at=2024_12_12T00_00_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=false
	I1212 00:00:32.001431  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565823-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1212 00:00:32.141631  106017 start.go:319] duration metric: took 22.150523355s to joinCluster
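Note on the join sequence above: minikube mints a join token on the primary (`kubeadm token create --print-join-command`), runs the printed `kubeadm join ... --control-plane` on the new node over SSH, enables and starts kubelet, then labels the node and removes the control-plane NoSchedule taint via kubectl. The sketch below shows only the labeling step, done directly with client-go instead of shelling out to kubectl; it is an illustration, not minikube's own code, and the kubeconfig path and node/label names are simply taken from the log lines above.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path as used by the in-VM kubectl invocation in the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx := context.Background()
        node, err := client.CoreV1().Nodes().Get(ctx, "ha-565823-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if node.Labels == nil {
            node.Labels = map[string]string{}
        }
        // Same effect as `kubectl label --overwrite nodes ha-565823-m02 minikube.k8s.io/primary=false`.
        node.Labels["minikube.k8s.io/primary"] = "false"
        if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("labeled node", node.Name)
    }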
	I1212 00:00:32.141725  106017 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:32.141997  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:32.143552  106017 out.go:177] * Verifying Kubernetes components...
	I1212 00:00:32.145227  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:32.332043  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:00:32.348508  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:00:32.348864  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 00:00:32.348951  106017 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I1212 00:00:32.349295  106017 node_ready.go:35] waiting up to 6m0s for node "ha-565823-m02" to be "Ready" ...
	I1212 00:00:32.349423  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:32.349436  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:32.349449  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:32.349460  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:32.362203  106017 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 00:00:32.850412  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:32.850436  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:32.850447  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:32.850455  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:32.854786  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:33.349683  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:33.349707  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:33.349714  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:33.349718  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:33.354356  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:33.849742  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:33.849766  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:33.849774  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:33.849778  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:33.854313  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:34.350516  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:34.350539  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:34.350547  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:34.350551  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:34.355023  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:34.355775  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:34.850173  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:34.850197  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:34.850206  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:34.850210  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:34.853276  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:35.350529  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:35.350560  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:35.350568  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:35.350574  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:35.354219  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:35.850352  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:35.850378  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:35.850386  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:35.850391  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:35.853507  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:36.349531  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:36.349555  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:36.349566  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:36.349572  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:36.353110  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:36.849604  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:36.849629  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:36.849640  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:36.849645  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:36.856046  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:00:36.856697  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:37.349961  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:37.349980  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:37.349989  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:37.349993  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:37.354377  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:37.849622  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:37.849647  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:37.849660  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:37.849665  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:37.853494  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:38.349611  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:38.349641  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:38.349654  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:38.349686  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:38.354211  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:38.850399  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:38.850424  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:38.850434  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:38.850440  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:38.854312  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:39.350249  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:39.350275  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:39.350288  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:39.350293  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:39.354293  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:39.355152  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:39.849553  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:39.849578  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:39.849587  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:39.849592  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:39.854321  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:40.350406  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:40.350438  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:40.350450  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:40.350456  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:40.354039  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:40.850576  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:40.850604  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:40.850615  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:40.850620  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:40.854393  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.349882  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:41.349908  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:41.349919  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:41.349925  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:41.353612  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.849701  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:41.849723  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:41.849732  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:41.849737  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:41.852781  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.853447  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:42.349592  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:42.349615  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:42.349624  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:42.349629  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:42.352747  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:42.849858  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:42.849881  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:42.849889  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:42.849894  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:42.853198  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.350237  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:43.350265  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:43.350274  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:43.350278  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:43.353850  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.850187  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:43.850215  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:43.850227  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:43.850232  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:43.853783  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.854292  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:44.349681  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:44.349707  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:44.349714  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:44.349719  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:44.353562  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:44.849731  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:44.849764  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:44.849775  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:44.849783  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:44.853689  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:45.349741  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:45.349768  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:45.349777  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:45.349781  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:45.353601  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:45.849492  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:45.849515  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:45.849524  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:45.849528  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:45.853061  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:46.349543  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:46.349573  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:46.349584  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:46.349589  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:46.352599  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:46.353168  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:46.850149  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:46.850169  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:46.850177  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:46.850182  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:46.854205  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:47.350169  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:47.350191  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:47.350200  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:47.350206  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:47.353664  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:47.849752  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:47.849780  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:47.849793  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:47.849798  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:47.853354  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:48.350356  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:48.350379  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:48.350387  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:48.350391  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:48.353938  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:48.354537  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:48.849794  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:48.849820  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:48.849829  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:48.849834  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:48.853163  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:49.350186  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:49.350215  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:49.350224  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:49.350229  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:49.353713  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:49.849652  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:49.849676  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:49.849684  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:49.849687  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:49.853033  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.350113  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:50.350142  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:50.350153  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:50.350159  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:50.353742  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.849593  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:50.849613  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:50.849621  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:50.849624  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:50.852952  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.853510  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:51.349926  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:51.349948  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:51.349957  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:51.349963  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:51.353301  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:51.849615  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:51.849638  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:51.849646  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:51.849655  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:51.853844  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:52.350547  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.350572  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.350580  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.350584  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.354248  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:52.850223  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.850252  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.850263  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.850268  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.853470  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:52.854190  106017 node_ready.go:49] node "ha-565823-m02" has status "Ready":"True"
	I1212 00:00:52.854220  106017 node_ready.go:38] duration metric: took 20.504892955s for node "ha-565823-m02" to be "Ready" ...
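Note: the block of repeated GETs above is the node-readiness wait, which fetches the Node object roughly every 500ms and checks whether its Ready condition is True, giving up after 6m0s. A minimal client-go sketch of the same polling pattern follows; the package and function names are mine, not minikube's.

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // isNodeReady reports whether the node's Ready condition is True.
    func isNodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForNodeReady polls the API server until the named node reports Ready
    // or the timeout expires.
    func waitForNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient errors and keep polling
                }
                return isNodeReady(node), nil
            })
    }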
	I1212 00:00:52.854231  106017 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:00:52.854318  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:52.854327  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.854334  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.854339  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.859106  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:52.865543  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.865630  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4q46c
	I1212 00:00:52.865638  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.865646  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.865651  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.868523  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.869398  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.869413  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.869424  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.869431  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.871831  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.872543  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.872562  106017 pod_ready.go:82] duration metric: took 6.990987ms for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.872571  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.872619  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mqzbv
	I1212 00:00:52.872627  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.872633  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.872639  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.874818  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.875523  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.875541  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.875551  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.875557  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.877466  106017 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:00:52.878112  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.878131  106017 pod_ready.go:82] duration metric: took 5.554087ms for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.878140  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.878190  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823
	I1212 00:00:52.878197  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.878204  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.878211  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.880364  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.880870  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.880885  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.880891  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.880895  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.883116  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.883560  106017 pod_ready.go:93] pod "etcd-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.883576  106017 pod_ready.go:82] duration metric: took 5.430598ms for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.883587  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.883672  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m02
	I1212 00:00:52.883682  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.883691  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.883700  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.886455  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.887079  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.887092  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.887099  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.887104  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.889373  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.889794  106017 pod_ready.go:93] pod "etcd-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.889810  106017 pod_ready.go:82] duration metric: took 6.198051ms for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.889825  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.051288  106017 request.go:632] Waited for 161.36947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:00:53.051368  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:00:53.051379  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.051390  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.051401  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.055000  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.251236  106017 request.go:632] Waited for 195.409824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:53.251334  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:53.251344  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.251352  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.251356  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.254773  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.255341  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:53.255360  106017 pod_ready.go:82] duration metric: took 365.529115ms for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.255371  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.450696  106017 request.go:632] Waited for 195.24618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:00:53.450768  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:00:53.450773  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.450782  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.450788  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.454132  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.650685  106017 request.go:632] Waited for 195.384956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:53.650745  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:53.650751  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.650758  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.650762  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.654400  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.655229  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:53.655251  106017 pod_ready.go:82] duration metric: took 399.872206ms for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.655268  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.850267  106017 request.go:632] Waited for 194.898023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:00:53.850372  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:00:53.850386  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.850398  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.850408  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.853683  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.050714  106017 request.go:632] Waited for 196.358846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.050791  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.050798  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.050810  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.050821  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.056588  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:00:54.057030  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.057048  106017 pod_ready.go:82] duration metric: took 401.768958ms for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.057064  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.251122  106017 request.go:632] Waited for 193.98571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:00:54.251196  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:00:54.251202  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.251209  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.251215  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.254477  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.451067  106017 request.go:632] Waited for 195.40262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:54.451162  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:54.451179  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.451188  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.451192  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.455097  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.455639  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.455655  106017 pod_ready.go:82] duration metric: took 398.584366ms for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.455670  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.650842  106017 request.go:632] Waited for 195.080577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:00:54.650913  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:00:54.650919  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.650926  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.650932  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.654798  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.851030  106017 request.go:632] Waited for 195.376895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.851100  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.851111  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.851123  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.851133  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.854879  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.855493  106017 pod_ready.go:93] pod "kube-proxy-hr5qc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.855509  106017 pod_ready.go:82] duration metric: took 399.831743ms for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.855522  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.050825  106017 request.go:632] Waited for 195.216303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:00:55.050891  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:00:55.050897  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.050904  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.050910  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.055618  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.250720  106017 request.go:632] Waited for 194.371361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:55.250781  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:55.250786  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.250795  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.250802  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.255100  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.255613  106017 pod_ready.go:93] pod "kube-proxy-p2lsd" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:55.255633  106017 pod_ready.go:82] duration metric: took 400.104583ms for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.255659  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.450909  106017 request.go:632] Waited for 195.147666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:00:55.450990  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:00:55.450999  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.451016  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.451026  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.455430  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.650645  106017 request.go:632] Waited for 194.425591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:55.650713  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:55.650719  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.650727  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.650736  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.654680  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:55.655493  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:55.655512  106017 pod_ready.go:82] duration metric: took 399.840095ms for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.655522  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.850696  106017 request.go:632] Waited for 195.072101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:00:55.850764  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:00:55.850769  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.850777  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.850782  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.855247  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.050354  106017 request.go:632] Waited for 194.294814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:56.050422  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:56.050428  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.050438  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.050441  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.053971  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:56.054426  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:56.054442  106017 pod_ready.go:82] duration metric: took 398.914314ms for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:56.054455  106017 pod_ready.go:39] duration metric: took 3.200213001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
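Note: the "Waited for ... due to client-side throttling, not priority and fairness" lines in the pod-readiness loop above come from client-go's client-side rate limiter. The rest.Config dumped at kapi.go:59 earlier has QPS:0 and Burst:0, which client-go treats as its defaults of 5 requests per second with a burst of 10, so the back-to-back pod and node GETs get delayed by a couple hundred milliseconds each. Purely as an illustration (the test does not do this), a client could raise those limits like so; the kubeconfig path parameter is a placeholder.

    package fastclient

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFasterClient builds a clientset with a higher client-side rate limit.
    func newFasterClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        // QPS/Burst of 0 mean "use client-go defaults" (5 QPS, burst 10), which is
        // what produces the throttling waits seen in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }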
	I1212 00:00:56.054475  106017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:00:56.054526  106017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:00:56.072661  106017 api_server.go:72] duration metric: took 23.930895419s to wait for apiserver process to appear ...
	I1212 00:00:56.072689  106017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:00:56.072711  106017 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1212 00:00:56.077698  106017 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1212 00:00:56.077790  106017 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1212 00:00:56.077803  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.077813  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.077823  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.078602  106017 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:00:56.078749  106017 api_server.go:141] control plane version: v1.31.2
	I1212 00:00:56.078777  106017 api_server.go:131] duration metric: took 6.080516ms to wait for apiserver health ...
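Note: the apiserver health wait above first checks the raw /healthz endpoint (expecting a 200 response with body "ok") and then reads /version to record the control-plane version. The sketch below performs an equivalent health probe through a client-go REST client; minikube itself issues a plain HTTPS GET, so this is an illustration rather than its implementation.

    package healthcheck

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkHealthz asks the apiserver's /healthz endpoint and returns nil when it
    // answers "ok", mirroring the 200/ok check in the log.
    func checkHealthz(ctx context.Context, client kubernetes.Interface) error {
        body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("unexpected /healthz response: %q", body)
        }
        return nil
    }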
	I1212 00:00:56.078787  106017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:00:56.251224  106017 request.go:632] Waited for 172.358728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.251308  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.251314  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.251322  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.251328  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.257604  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:00:56.263097  106017 system_pods.go:59] 17 kube-system pods found
	I1212 00:00:56.263131  106017 system_pods.go:61] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:00:56.263138  106017 system_pods.go:61] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:00:56.263146  106017 system_pods.go:61] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:00:56.263154  106017 system_pods.go:61] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:00:56.263159  106017 system_pods.go:61] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:00:56.263164  106017 system_pods.go:61] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:00:56.263168  106017 system_pods.go:61] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:00:56.263173  106017 system_pods.go:61] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:00:56.263179  106017 system_pods.go:61] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:00:56.263184  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:00:56.263191  106017 system_pods.go:61] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:00:56.263197  106017 system_pods.go:61] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:00:56.263203  106017 system_pods.go:61] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:00:56.263211  106017 system_pods.go:61] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:00:56.263216  106017 system_pods.go:61] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:00:56.263222  106017 system_pods.go:61] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:00:56.263228  106017 system_pods.go:61] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:00:56.263239  106017 system_pods.go:74] duration metric: took 184.44261ms to wait for pod list to return data ...
	I1212 00:00:56.263253  106017 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:00:56.450737  106017 request.go:632] Waited for 187.395152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:00:56.450799  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:00:56.450805  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.450817  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.450824  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.455806  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.456064  106017 default_sa.go:45] found service account: "default"
	I1212 00:00:56.456083  106017 default_sa.go:55] duration metric: took 192.823176ms for default service account to be created ...
	I1212 00:00:56.456093  106017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:00:56.650300  106017 request.go:632] Waited for 194.107546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.650372  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.650380  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.650392  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.650403  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.656388  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:00:56.662029  106017 system_pods.go:86] 17 kube-system pods found
	I1212 00:00:56.662073  106017 system_pods.go:89] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:00:56.662082  106017 system_pods.go:89] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:00:56.662088  106017 system_pods.go:89] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:00:56.662094  106017 system_pods.go:89] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:00:56.662100  106017 system_pods.go:89] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:00:56.662108  106017 system_pods.go:89] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:00:56.662118  106017 system_pods.go:89] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:00:56.662124  106017 system_pods.go:89] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:00:56.662133  106017 system_pods.go:89] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:00:56.662140  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:00:56.662148  106017 system_pods.go:89] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:00:56.662153  106017 system_pods.go:89] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:00:56.662161  106017 system_pods.go:89] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:00:56.662165  106017 system_pods.go:89] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:00:56.662173  106017 system_pods.go:89] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:00:56.662178  106017 system_pods.go:89] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:00:56.662187  106017 system_pods.go:89] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:00:56.662196  106017 system_pods.go:126] duration metric: took 206.091251ms to wait for k8s-apps to be running ...
	I1212 00:00:56.662210  106017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:00:56.662262  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:00:56.679491  106017 system_svc.go:56] duration metric: took 17.268621ms WaitForService to wait for kubelet
	I1212 00:00:56.679526  106017 kubeadm.go:582] duration metric: took 24.537768524s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
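
The kubelet check above is just `sudo systemctl is-active --quiet service kubelet` run over SSH; the service counts as running when the command exits 0. A local stand-in, assuming a systemd host:

package main

import (
	"fmt"
	"os/exec"
)

// serviceActive reports whether a systemd unit is active. With --quiet the
// command prints nothing; the exit status alone carries the answer.
func serviceActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", serviceActive("kubelet"))
}
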
	I1212 00:00:56.679546  106017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:00:56.851276  106017 request.go:632] Waited for 171.630771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1212 00:00:56.851341  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1212 00:00:56.851347  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.851354  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.851363  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.856253  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.857605  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:00:56.857634  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:00:56.857650  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:00:56.857655  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:00:56.857661  106017 node_conditions.go:105] duration metric: took 178.109574ms to run NodePressure ...
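
The NodePressure verification above boils down to listing the nodes and reading their reported capacity (both nodes show 2 CPUs and 17734596Ki of ephemeral storage). A minimal client-go sketch of that read, assuming a reachable kubeconfig:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
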
	I1212 00:00:56.857683  106017 start.go:241] waiting for startup goroutines ...
	I1212 00:00:56.857713  106017 start.go:255] writing updated cluster config ...
	I1212 00:00:56.859819  106017 out.go:201] 
	I1212 00:00:56.861355  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:56.861459  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:56.863133  106017 out.go:177] * Starting "ha-565823-m03" control-plane node in "ha-565823" cluster
	I1212 00:00:56.864330  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:00:56.864351  106017 cache.go:56] Caching tarball of preloaded images
	I1212 00:00:56.864443  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:00:56.864454  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 00:00:56.864537  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:56.864703  106017 start.go:360] acquireMachinesLock for ha-565823-m03: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:00:56.864743  106017 start.go:364] duration metric: took 22.236µs to acquireMachinesLock for "ha-565823-m03"
	I1212 00:00:56.864764  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
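
acquireMachinesLock above takes a named, host-wide lock with a 500ms retry delay and a 13-minute timeout before a new VM may be created. A generic sketch of that acquire-with-deadline pattern (not minikube's actual lock implementation, which coordinates across processes):

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var (
	mu    sync.Mutex
	locks = map[string]bool{}
)

// tryLock attempts to take the named lock without blocking.
func tryLock(name string) bool {
	mu.Lock()
	defer mu.Unlock()
	if locks[name] {
		return false
	}
	locks[name] = true
	return true
}

// acquire retries tryLock every `delay` until it succeeds or `timeout` elapses.
func acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock(name) {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring lock " + name)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	if err := acquire("ha-565823-m03", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("lock acquired")
}
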
	I1212 00:00:56.864862  106017 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1212 00:00:56.866313  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 00:00:56.866390  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:56.866430  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:56.881400  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1212 00:00:56.881765  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:56.882247  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:56.882274  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:56.882594  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:56.882778  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:00:56.882918  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:00:56.883084  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1212 00:00:56.883116  106017 client.go:168] LocalClient.Create starting
	I1212 00:00:56.883150  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 00:00:56.883194  106017 main.go:141] libmachine: Decoding PEM data...
	I1212 00:00:56.883215  106017 main.go:141] libmachine: Parsing certificate...
	I1212 00:00:56.883281  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 00:00:56.883314  106017 main.go:141] libmachine: Decoding PEM data...
	I1212 00:00:56.883330  106017 main.go:141] libmachine: Parsing certificate...
	I1212 00:00:56.883354  106017 main.go:141] libmachine: Running pre-create checks...
	I1212 00:00:56.883365  106017 main.go:141] libmachine: (ha-565823-m03) Calling .PreCreateCheck
	I1212 00:00:56.883572  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:00:56.883977  106017 main.go:141] libmachine: Creating machine...
	I1212 00:00:56.883994  106017 main.go:141] libmachine: (ha-565823-m03) Calling .Create
	I1212 00:00:56.884152  106017 main.go:141] libmachine: (ha-565823-m03) Creating KVM machine...
	I1212 00:00:56.885388  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found existing default KVM network
	I1212 00:00:56.885537  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found existing private KVM network mk-ha-565823
	I1212 00:00:56.885677  106017 main.go:141] libmachine: (ha-565823-m03) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 ...
	I1212 00:00:56.885696  106017 main.go:141] libmachine: (ha-565823-m03) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 00:00:56.885764  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:56.885674  106823 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:00:56.885859  106017 main.go:141] libmachine: (ha-565823-m03) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 00:00:57.157670  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.157529  106823 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa...
	I1212 00:00:57.207576  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.207455  106823 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/ha-565823-m03.rawdisk...
	I1212 00:00:57.207627  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Writing magic tar header
	I1212 00:00:57.207643  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Writing SSH key tar header
	I1212 00:00:57.207726  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.207648  106823 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 ...
	I1212 00:00:57.207776  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03
	I1212 00:00:57.207803  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 (perms=drwx------)
	I1212 00:00:57.207814  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 00:00:57.207826  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:00:57.207832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 00:00:57.207841  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 00:00:57.207846  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins
	I1212 00:00:57.207853  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home
	I1212 00:00:57.207859  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Skipping /home - not owner
	I1212 00:00:57.207869  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 00:00:57.207875  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 00:00:57.207903  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 00:00:57.207923  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 00:00:57.207937  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 00:00:57.207945  106017 main.go:141] libmachine: (ha-565823-m03) Creating domain...
	I1212 00:00:57.208764  106017 main.go:141] libmachine: (ha-565823-m03) define libvirt domain using xml: 
	I1212 00:00:57.208779  106017 main.go:141] libmachine: (ha-565823-m03) <domain type='kvm'>
	I1212 00:00:57.208785  106017 main.go:141] libmachine: (ha-565823-m03)   <name>ha-565823-m03</name>
	I1212 00:00:57.208790  106017 main.go:141] libmachine: (ha-565823-m03)   <memory unit='MiB'>2200</memory>
	I1212 00:00:57.208795  106017 main.go:141] libmachine: (ha-565823-m03)   <vcpu>2</vcpu>
	I1212 00:00:57.208799  106017 main.go:141] libmachine: (ha-565823-m03)   <features>
	I1212 00:00:57.208803  106017 main.go:141] libmachine: (ha-565823-m03)     <acpi/>
	I1212 00:00:57.208807  106017 main.go:141] libmachine: (ha-565823-m03)     <apic/>
	I1212 00:00:57.208816  106017 main.go:141] libmachine: (ha-565823-m03)     <pae/>
	I1212 00:00:57.208827  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.208832  106017 main.go:141] libmachine: (ha-565823-m03)   </features>
	I1212 00:00:57.208837  106017 main.go:141] libmachine: (ha-565823-m03)   <cpu mode='host-passthrough'>
	I1212 00:00:57.208849  106017 main.go:141] libmachine: (ha-565823-m03)   
	I1212 00:00:57.208858  106017 main.go:141] libmachine: (ha-565823-m03)   </cpu>
	I1212 00:00:57.208866  106017 main.go:141] libmachine: (ha-565823-m03)   <os>
	I1212 00:00:57.208875  106017 main.go:141] libmachine: (ha-565823-m03)     <type>hvm</type>
	I1212 00:00:57.208882  106017 main.go:141] libmachine: (ha-565823-m03)     <boot dev='cdrom'/>
	I1212 00:00:57.208899  106017 main.go:141] libmachine: (ha-565823-m03)     <boot dev='hd'/>
	I1212 00:00:57.208912  106017 main.go:141] libmachine: (ha-565823-m03)     <bootmenu enable='no'/>
	I1212 00:00:57.208918  106017 main.go:141] libmachine: (ha-565823-m03)   </os>
	I1212 00:00:57.208926  106017 main.go:141] libmachine: (ha-565823-m03)   <devices>
	I1212 00:00:57.208933  106017 main.go:141] libmachine: (ha-565823-m03)     <disk type='file' device='cdrom'>
	I1212 00:00:57.208946  106017 main.go:141] libmachine: (ha-565823-m03)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/boot2docker.iso'/>
	I1212 00:00:57.208957  106017 main.go:141] libmachine: (ha-565823-m03)       <target dev='hdc' bus='scsi'/>
	I1212 00:00:57.208964  106017 main.go:141] libmachine: (ha-565823-m03)       <readonly/>
	I1212 00:00:57.208971  106017 main.go:141] libmachine: (ha-565823-m03)     </disk>
	I1212 00:00:57.208981  106017 main.go:141] libmachine: (ha-565823-m03)     <disk type='file' device='disk'>
	I1212 00:00:57.208993  106017 main.go:141] libmachine: (ha-565823-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 00:00:57.209040  106017 main.go:141] libmachine: (ha-565823-m03)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/ha-565823-m03.rawdisk'/>
	I1212 00:00:57.209066  106017 main.go:141] libmachine: (ha-565823-m03)       <target dev='hda' bus='virtio'/>
	I1212 00:00:57.209075  106017 main.go:141] libmachine: (ha-565823-m03)     </disk>
	I1212 00:00:57.209092  106017 main.go:141] libmachine: (ha-565823-m03)     <interface type='network'>
	I1212 00:00:57.209105  106017 main.go:141] libmachine: (ha-565823-m03)       <source network='mk-ha-565823'/>
	I1212 00:00:57.209114  106017 main.go:141] libmachine: (ha-565823-m03)       <model type='virtio'/>
	I1212 00:00:57.209125  106017 main.go:141] libmachine: (ha-565823-m03)     </interface>
	I1212 00:00:57.209136  106017 main.go:141] libmachine: (ha-565823-m03)     <interface type='network'>
	I1212 00:00:57.209145  106017 main.go:141] libmachine: (ha-565823-m03)       <source network='default'/>
	I1212 00:00:57.209155  106017 main.go:141] libmachine: (ha-565823-m03)       <model type='virtio'/>
	I1212 00:00:57.209164  106017 main.go:141] libmachine: (ha-565823-m03)     </interface>
	I1212 00:00:57.209179  106017 main.go:141] libmachine: (ha-565823-m03)     <serial type='pty'>
	I1212 00:00:57.209191  106017 main.go:141] libmachine: (ha-565823-m03)       <target port='0'/>
	I1212 00:00:57.209198  106017 main.go:141] libmachine: (ha-565823-m03)     </serial>
	I1212 00:00:57.209211  106017 main.go:141] libmachine: (ha-565823-m03)     <console type='pty'>
	I1212 00:00:57.209219  106017 main.go:141] libmachine: (ha-565823-m03)       <target type='serial' port='0'/>
	I1212 00:00:57.209228  106017 main.go:141] libmachine: (ha-565823-m03)     </console>
	I1212 00:00:57.209238  106017 main.go:141] libmachine: (ha-565823-m03)     <rng model='virtio'>
	I1212 00:00:57.209275  106017 main.go:141] libmachine: (ha-565823-m03)       <backend model='random'>/dev/random</backend>
	I1212 00:00:57.209299  106017 main.go:141] libmachine: (ha-565823-m03)     </rng>
	I1212 00:00:57.209310  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.209316  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.209327  106017 main.go:141] libmachine: (ha-565823-m03)   </devices>
	I1212 00:00:57.209344  106017 main.go:141] libmachine: (ha-565823-m03) </domain>
	I1212 00:00:57.209358  106017 main.go:141] libmachine: (ha-565823-m03) 
	I1212 00:00:57.216296  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:a0:11:b6 in network default
	I1212 00:00:57.216833  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring networks are active...
	I1212 00:00:57.216849  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:57.217611  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring network default is active
	I1212 00:00:57.217884  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring network mk-ha-565823 is active
	I1212 00:00:57.218224  106017 main.go:141] libmachine: (ha-565823-m03) Getting domain xml...
	I1212 00:00:57.218920  106017 main.go:141] libmachine: (ha-565823-m03) Creating domain...
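
The XML printed above is handed to libvirt to define and then start the ha-565823-m03 domain. Outside minikube the same two steps can be reproduced with virsh; the file name below is hypothetical and would hold the XML shown above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `virsh define` registers the domain from its XML; `virsh start` boots it.
	if out, err := exec.Command("virsh", "define", "ha-565823-m03.xml").CombinedOutput(); err != nil {
		fmt.Printf("define failed: %v\n%s", err, out)
		return
	}
	if out, err := exec.Command("virsh", "start", "ha-565823-m03").CombinedOutput(); err != nil {
		fmt.Printf("start failed: %v\n%s", err, out)
		return
	}
	fmt.Println("domain defined and started")
}
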
	I1212 00:00:58.452742  106017 main.go:141] libmachine: (ha-565823-m03) Waiting to get IP...
	I1212 00:00:58.453425  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:58.453790  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:58.453832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:58.453785  106823 retry.go:31] will retry after 272.104158ms: waiting for machine to come up
	I1212 00:00:58.727281  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:58.727898  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:58.727928  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:58.727841  106823 retry.go:31] will retry after 285.622453ms: waiting for machine to come up
	I1212 00:00:59.015493  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.016037  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.016069  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.015997  106823 retry.go:31] will retry after 462.910385ms: waiting for machine to come up
	I1212 00:00:59.480661  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.481128  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.481154  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.481091  106823 retry.go:31] will retry after 428.639733ms: waiting for machine to come up
	I1212 00:00:59.911938  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.912474  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.912505  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.912415  106823 retry.go:31] will retry after 493.229639ms: waiting for machine to come up
	I1212 00:01:00.406997  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:00.407456  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:00.407482  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:00.407400  106823 retry.go:31] will retry after 633.230425ms: waiting for machine to come up
	I1212 00:01:01.042449  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:01.042884  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:01.042905  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:01.042838  106823 retry.go:31] will retry after 978.049608ms: waiting for machine to come up
	I1212 00:01:02.022776  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:02.023212  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:02.023245  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:02.023153  106823 retry.go:31] will retry after 1.111513755s: waiting for machine to come up
	I1212 00:01:03.136308  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:03.136734  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:03.136763  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:03.136679  106823 retry.go:31] will retry after 1.728462417s: waiting for machine to come up
	I1212 00:01:04.867619  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:04.868118  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:04.868157  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:04.868052  106823 retry.go:31] will retry after 1.898297589s: waiting for machine to come up
	I1212 00:01:06.769272  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:06.769757  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:06.769825  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:06.769731  106823 retry.go:31] will retry after 1.922578081s: waiting for machine to come up
	I1212 00:01:08.693477  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:08.693992  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:08.694026  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:08.693918  106823 retry.go:31] will retry after 2.235570034s: waiting for machine to come up
	I1212 00:01:10.932341  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:10.932805  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:10.932827  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:10.932750  106823 retry.go:31] will retry after 4.200404272s: waiting for machine to come up
	I1212 00:01:15.136581  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:15.136955  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:15.136979  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:15.136906  106823 retry.go:31] will retry after 4.331994391s: waiting for machine to come up
	I1212 00:01:19.472184  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.472659  106017 main.go:141] libmachine: (ha-565823-m03) Found IP for machine: 192.168.39.95
	I1212 00:01:19.472679  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has current primary IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.472686  106017 main.go:141] libmachine: (ha-565823-m03) Reserving static IP address...
	I1212 00:01:19.473105  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find host DHCP lease matching {name: "ha-565823-m03", mac: "52:54:00:03:bd:55", ip: "192.168.39.95"} in network mk-ha-565823
	I1212 00:01:19.544988  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Getting to WaitForSSH function...
	I1212 00:01:19.545019  106017 main.go:141] libmachine: (ha-565823-m03) Reserved static IP address: 192.168.39.95
	I1212 00:01:19.545082  106017 main.go:141] libmachine: (ha-565823-m03) Waiting for SSH to be available...
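
The "will retry after ..." sequence above is a polling loop: libmachine repeatedly asks libvirt for a DHCP lease and sleeps a growing, jittered interval between attempts until the address appears (here after roughly 23 seconds). An illustrative retry helper in that spirit (not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls cond with a growing, jittered delay until it reports true,
// returns an error, or the deadline passes.
func waitFor(cond func() (bool, error), initial, max time.Duration, deadline time.Time) error {
	delay := initial
	for {
		ok, err := cond()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		// Add up to 50% jitter, then roughly double the base delay, capped at max.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	start := time.Now()
	err := waitFor(func() (bool, error) {
		// Stand-in for "does the libvirt domain have a DHCP lease yet?"
		return time.Since(start) > 3*time.Second, nil
	}, 250*time.Millisecond, 5*time.Second, time.Now().Add(time.Minute))
	fmt.Println("done:", err)
}
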
	I1212 00:01:19.547914  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.548457  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.548493  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.548645  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using SSH client type: external
	I1212 00:01:19.548672  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa (-rw-------)
	I1212 00:01:19.548700  106017 main.go:141] libmachine: (ha-565823-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:01:19.548714  106017 main.go:141] libmachine: (ha-565823-m03) DBG | About to run SSH command:
	I1212 00:01:19.548726  106017 main.go:141] libmachine: (ha-565823-m03) DBG | exit 0
	I1212 00:01:19.675749  106017 main.go:141] libmachine: (ha-565823-m03) DBG | SSH cmd err, output: <nil>: 
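
The external-client probe above simply shells out to ssh with host-key checking disabled and runs `exit 0`; a zero exit status means the guest's sshd is reachable. A condensed equivalent (the address and key path are taken from the log; other ssh options from the log are omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... exit 0` and treats a zero exit status as "SSH is up".
func sshReady(addr, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	ok := sshReady("192.168.39.95",
		"/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa")
	fmt.Println("ssh ready:", ok)
}
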
	I1212 00:01:19.676029  106017 main.go:141] libmachine: (ha-565823-m03) KVM machine creation complete!
	I1212 00:01:19.676360  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:01:19.676900  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:19.677088  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:19.677296  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:01:19.677311  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetState
	I1212 00:01:19.678472  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:01:19.678488  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:01:19.678497  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:01:19.678505  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.680612  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.680988  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.681021  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.681172  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.681326  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.681449  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.681545  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.681635  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.681832  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.681842  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:01:19.794939  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:01:19.794969  106017 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:01:19.794980  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.797552  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.797884  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.797916  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.798040  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.798220  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.798369  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.798507  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.798667  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.798834  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.798844  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:01:19.912451  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:01:19.912540  106017 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:01:19.912555  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:01:19.912568  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:19.912805  106017 buildroot.go:166] provisioning hostname "ha-565823-m03"
	I1212 00:01:19.912831  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:19.912939  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.915606  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.916027  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.916059  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.916213  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.916386  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.916533  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.916630  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.916776  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.917012  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.917027  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823-m03 && echo "ha-565823-m03" | sudo tee /etc/hostname
	I1212 00:01:20.047071  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823-m03
	
	I1212 00:01:20.047100  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.049609  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.050009  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.050034  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.050209  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.050389  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.050537  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.050700  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.050854  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.051086  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.051105  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:01:20.174838  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:01:20.174877  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:01:20.174898  106017 buildroot.go:174] setting up certificates
	I1212 00:01:20.174909  106017 provision.go:84] configureAuth start
	I1212 00:01:20.174924  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:20.175232  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:20.177664  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.178007  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.178038  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.178124  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.180472  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.180778  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.180806  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.180963  106017 provision.go:143] copyHostCerts
	I1212 00:01:20.180995  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:01:20.181046  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:01:20.181058  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:01:20.181146  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:01:20.181242  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:01:20.181266  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:01:20.181279  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:01:20.181315  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:01:20.181387  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:01:20.181413  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:01:20.181419  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:01:20.181456  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:01:20.181524  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823-m03 san=[127.0.0.1 192.168.39.95 ha-565823-m03 localhost minikube]
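
The server certificate generated above is signed by the profile's CA and carries the SANs listed in the log (127.0.0.1, 192.168.39.95, ha-565823-m03, localhost, minikube). A rough crypto/x509 sketch of producing such a certificate; the file names are hypothetical, the CA key is assumed to be PKCS#1 RSA, and this is not minikube's own cert helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA pair that signs the server certificate (paths hypothetical).
	caCertPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caCertPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-565823-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		DNSNames:    []string{"ha-565823-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o600))
	check(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}
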
	I1212 00:01:20.442822  106017 provision.go:177] copyRemoteCerts
	I1212 00:01:20.442883  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:01:20.442916  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.445614  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.445950  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.445983  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.446122  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.446304  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.446460  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.446571  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:20.533808  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:01:20.533894  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:01:20.558631  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:01:20.558695  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:01:20.584088  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:01:20.584173  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:01:20.608061  106017 provision.go:87] duration metric: took 433.135165ms to configureAuth
	I1212 00:01:20.608090  106017 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:01:20.608294  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:20.608371  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.611003  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.611319  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.611348  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.611489  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.611709  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.611885  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.612026  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.612174  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.612326  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.612341  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:01:20.847014  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:01:20.847049  106017 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:01:20.847062  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetURL
	I1212 00:01:20.848448  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using libvirt version 6000000
	I1212 00:01:20.850813  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.851216  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.851246  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.851443  106017 main.go:141] libmachine: Docker is up and running!
	I1212 00:01:20.851459  106017 main.go:141] libmachine: Reticulating splines...
	I1212 00:01:20.851469  106017 client.go:171] duration metric: took 23.968343391s to LocalClient.Create
	I1212 00:01:20.851499  106017 start.go:167] duration metric: took 23.968416391s to libmachine.API.Create "ha-565823"
	I1212 00:01:20.851513  106017 start.go:293] postStartSetup for "ha-565823-m03" (driver="kvm2")
	I1212 00:01:20.851525  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:01:20.851547  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:20.851812  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:01:20.851848  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.854066  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.854470  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.854498  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.854683  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.854881  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.855047  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.855202  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:20.942769  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:01:20.947268  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:01:20.947295  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:01:20.947350  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:01:20.947427  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:01:20.947438  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:01:20.947517  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:01:20.957067  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:01:20.982552  106017 start.go:296] duration metric: took 131.024484ms for postStartSetup
	I1212 00:01:20.982610  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:01:20.983169  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:20.985456  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.985914  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.985943  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.986219  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:01:20.986450  106017 start.go:128] duration metric: took 24.12157496s to createHost
	I1212 00:01:20.986480  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.988832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.989169  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.989192  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.989296  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.989476  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.989596  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.989695  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.989852  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.990012  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.990022  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:01:21.104340  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961681.076284817
	
	I1212 00:01:21.104366  106017 fix.go:216] guest clock: 1733961681.076284817
	I1212 00:01:21.104376  106017 fix.go:229] Guest: 2024-12-12 00:01:21.076284817 +0000 UTC Remote: 2024-12-12 00:01:20.986466192 +0000 UTC m=+151.148293246 (delta=89.818625ms)
	I1212 00:01:21.104397  106017 fix.go:200] guest clock delta is within tolerance: 89.818625ms
	I1212 00:01:21.104403  106017 start.go:83] releasing machines lock for "ha-565823-m03", held for 24.239651482s
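
The guest-clock check above compares the guest's `date +%s.%N` output against the host-side timestamp taken when the command returned, and accepts the machine when the drift is small. Recomputing the logged 89.818625ms delta from the two values in the log (the tolerance constant below is illustrative, not necessarily minikube's):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both values are copied from the log lines above.
	guest := time.Unix(1733961681, 76284817) // the guest's `date +%s.%N`
	host, err := time.Parse(time.RFC3339Nano, "2024-12-12T00:01:20.986466192Z")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // illustrative threshold
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("guest clock delta: %v (within %v: %v)\n", delta, tolerance, within)
}
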
	I1212 00:01:21.104427  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.104703  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:21.107255  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.107654  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.107680  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.109803  106017 out.go:177] * Found network options:
	I1212 00:01:21.111036  106017 out.go:177]   - NO_PROXY=192.168.39.19,192.168.39.103
	W1212 00:01:21.112272  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:01:21.112293  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:01:21.112306  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.112787  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.112963  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.113063  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:01:21.113107  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	W1212 00:01:21.113169  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:01:21.113192  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:01:21.113246  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:01:21.113266  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:21.115806  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.115895  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116242  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.116269  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116313  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.116334  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116399  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:21.116570  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:21.116593  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:21.116694  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:21.116713  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:21.116861  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:21.116856  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:21.116989  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:21.354040  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:01:21.360555  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:01:21.360632  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:01:21.379750  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:01:21.379780  106017 start.go:495] detecting cgroup driver to use...
	I1212 00:01:21.379863  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:01:21.395389  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:01:21.409350  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:01:21.409431  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:01:21.425472  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:01:21.440472  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:01:21.567746  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:01:21.711488  106017 docker.go:233] disabling docker service ...
	I1212 00:01:21.711577  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:01:21.727302  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:01:21.740916  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:01:21.878118  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:01:22.013165  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:01:22.031377  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:01:22.050768  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:01:22.050841  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.062469  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:01:22.062542  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.074854  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.085834  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.096567  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:01:22.110009  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.121122  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.139153  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.150221  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:01:22.160252  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:01:22.160329  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:01:22.175082  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:01:22.185329  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:22.327197  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:01:22.421776  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:01:22.421853  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:01:22.427874  106017 start.go:563] Will wait 60s for crictl version
	I1212 00:01:22.427937  106017 ssh_runner.go:195] Run: which crictl
	I1212 00:01:22.432412  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:01:22.478561  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:01:22.478659  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:01:22.507894  106017 ssh_runner.go:195] Run: crio --version
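For anyone re-checking this step by hand, a minimal sketch of verifying the CRI-O settings applied by the sed edits above (the file path and keys come from the logged commands; the grep itself is only illustrative):

  $ sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
  # expected, given the edits logged above (order may differ):
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  $ sudo crictl version   # should match the RuntimeName cri-o / RuntimeVersion 1.29.1 output captured above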
	I1212 00:01:22.541025  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:01:22.542600  106017 out.go:177]   - env NO_PROXY=192.168.39.19
	I1212 00:01:22.544205  106017 out.go:177]   - env NO_PROXY=192.168.39.19,192.168.39.103
	I1212 00:01:22.545527  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:22.548679  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:22.549115  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:22.549143  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:22.549402  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:01:22.553987  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:01:22.567227  106017 mustload.go:65] Loading cluster: ha-565823
	I1212 00:01:22.567647  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:22.568059  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:22.568178  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:22.583960  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
	I1212 00:01:22.584451  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:22.584977  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:22.585002  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:22.585378  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:22.585624  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:01:22.587277  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:01:22.587636  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:22.587686  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:22.602128  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I1212 00:01:22.602635  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:22.603141  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:22.603163  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:22.603490  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:22.603676  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:01:22.603824  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.95
	I1212 00:01:22.603837  106017 certs.go:194] generating shared ca certs ...
	I1212 00:01:22.603856  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.603989  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:01:22.604025  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:01:22.604035  106017 certs.go:256] generating profile certs ...
	I1212 00:01:22.604113  106017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:01:22.604138  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c
	I1212 00:01:22.604153  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.95 192.168.39.254]
	I1212 00:01:22.747110  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c ...
	I1212 00:01:22.747151  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c: {Name:mke6cc66706783f55b7ebb6ba30cc07d7c6eb29b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.747333  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c ...
	I1212 00:01:22.747345  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c: {Name:mk0abaf339db164c799eddef60276ad5fb5ed33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.747431  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:01:22.747642  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1212 00:01:22.747827  106017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:01:22.747853  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:01:22.747874  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:01:22.747894  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:01:22.747911  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:01:22.747929  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:01:22.747949  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:01:22.747967  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:01:22.767751  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:01:22.767871  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:01:22.767924  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:01:22.767939  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:01:22.767972  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:01:22.768009  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:01:22.768041  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:01:22.768088  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:01:22.768123  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:22.768140  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:01:22.768153  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:01:22.768246  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:01:22.771620  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:22.772074  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:01:22.772105  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:22.772278  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:01:22.772487  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:01:22.772661  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:01:22.772805  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:01:22.855976  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 00:01:22.862422  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 00:01:22.875336  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 00:01:22.881430  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 00:01:22.892620  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 00:01:22.897804  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 00:01:22.910746  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 00:01:22.916511  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1212 00:01:22.927437  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 00:01:22.932403  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 00:01:22.945174  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 00:01:22.949699  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 00:01:22.963425  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:01:22.991332  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:01:23.014716  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:01:23.038094  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:01:23.062120  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1212 00:01:23.086604  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:01:23.110420  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:01:23.136037  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:01:23.162577  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:01:23.188311  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:01:23.211713  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:01:23.235230  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 00:01:23.253375  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 00:01:23.271455  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 00:01:23.289505  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1212 00:01:23.307850  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 00:01:23.325848  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 00:01:23.344038  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 00:01:23.362393  106017 ssh_runner.go:195] Run: openssl version
	I1212 00:01:23.368722  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:01:23.380405  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.385472  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.385534  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.392130  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:01:23.405241  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:01:23.418140  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.422762  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.422819  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.428754  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:01:23.441496  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:01:23.454394  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.459170  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.459227  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.465192  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
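The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the standard OpenSSL convention: the link name is the certificate's subject hash with a .0 suffix, which is exactly what the `openssl x509 -hash -noout` calls compute. A minimal sketch for the minikube CA:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941
  $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0   # same link the log creates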
	I1212 00:01:23.476720  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:01:23.481551  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:01:23.481615  106017 kubeadm.go:934] updating node {m03 192.168.39.95 8443 v1.31.2 crio true true} ...
	I1212 00:01:23.481715  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:01:23.481752  106017 kube-vip.go:115] generating kube-vip config ...
	I1212 00:01:23.481784  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:01:23.499895  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:01:23.499971  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
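The leader-election settings in the manifest above mean the control-plane nodes compete for a Lease named plndr-cp-lock, and the current holder answers on the VIP 192.168.39.254. An illustrative check once the cluster is up (the ha-565823 kubectl context name is an assumption based on the profile name):

  $ kubectl --context ha-565823 -n kube-system get lease plndr-cp-lock   # HOLDER shows which node currently owns the VIP
  $ ping -c 1 192.168.39.254                                             # the APIServerHAVIP configured above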
	I1212 00:01:23.500042  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:01:23.510617  106017 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1212 00:01:23.510681  106017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1212 00:01:23.520696  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1212 00:01:23.520748  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:01:23.520697  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1212 00:01:23.520779  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:01:23.520698  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1212 00:01:23.520844  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:01:23.520847  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:01:23.520904  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:01:23.539476  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:01:23.539619  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:01:23.539628  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1212 00:01:23.539658  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1212 00:01:23.539704  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1212 00:01:23.539735  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1212 00:01:23.554300  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1212 00:01:23.554341  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
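A quick, purely illustrative way to confirm the transferred binaries are usable on the new node (paths and version come from the log):

  $ sudo /var/lib/minikube/binaries/v1.31.2/kubelet --version
  Kubernetes v1.31.2
  $ sudo /var/lib/minikube/binaries/v1.31.2/kubeadm version -o short
  v1.31.2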
	I1212 00:01:24.410276  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 00:01:24.421207  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 00:01:24.438691  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:01:24.456935  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:01:24.474104  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:01:24.478799  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:01:24.492116  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:24.635069  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
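At this point kubelet on ha-565823-m03 has been started with the unit files and static kube-vip manifest written above; a sketch of a manual spot-check over SSH (file paths are the ones from the scp lines above):

  $ sudo systemctl is-active kubelet
  active
  $ ls -l /etc/kubernetes/manifests/kube-vip.yaml                      # the 1441-byte manifest copied above
  $ ls -l /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /lib/systemd/system/kubelet.service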
	I1212 00:01:24.653898  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:01:24.654454  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:24.654529  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:24.669805  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 00:01:24.670391  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:24.671018  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:24.671047  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:24.671400  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:24.671580  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:01:24.671761  106017 start.go:317] joinCluster: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cluster
Name:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:01:24.671883  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:01:24.671905  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:01:24.675034  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:24.675479  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:01:24.675501  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:24.675693  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:01:24.675871  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:01:24.676006  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:01:24.676127  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:01:24.845860  106017 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:01:24.845904  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4sbqiu.4yic5pe52bxp935w --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
	I1212 00:01:47.124612  106017 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4sbqiu.4yic5pe52bxp935w --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443": (22.27867542s)
	I1212 00:01:47.124662  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:01:47.623528  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823-m03 minikube.k8s.io/updated_at=2024_12_12T00_01_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=false
	I1212 00:01:47.763869  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565823-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1212 00:01:47.919307  106017 start.go:319] duration metric: took 23.247542297s to joinCluster
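After the join, the label and taint adjustments just above can be confirmed by hand; a sketch, again assuming the profile-named ha-565823 context:

  $ kubectl --context ha-565823 get nodes -o wide                              # ha-565823-m03 should now be listed as a control-plane node
  $ kubectl --context ha-565823 describe node ha-565823-m03 | grep -i taints   # the NoSchedule taint was removed above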
	I1212 00:01:47.919407  106017 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:01:47.919784  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:47.920983  106017 out.go:177] * Verifying Kubernetes components...
	I1212 00:01:47.922471  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:48.195755  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:01:48.249445  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:01:48.249790  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 00:01:48.249881  106017 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I1212 00:01:48.250202  106017 node_ready.go:35] waiting up to 6m0s for node "ha-565823-m03" to be "Ready" ...
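The repeated GETs that follow are minikube's programmatic readiness wait; an equivalent manual one-liner (context name assumed as above) would be:

  $ kubectl --context ha-565823 wait --for=condition=Ready node/ha-565823-m03 --timeout=6m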
	I1212 00:01:48.250300  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:48.250311  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:48.250329  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:48.250338  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:48.255147  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:48.750647  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:48.750680  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:48.750691  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:48.750699  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:48.755066  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:49.251152  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:49.251203  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:49.251216  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:49.251222  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:49.254927  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:49.751403  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:49.751424  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:49.751432  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:49.751436  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:49.754669  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:50.250595  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:50.250620  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:50.250629  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:50.250633  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:50.254009  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:50.254537  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:50.751206  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:50.751237  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:50.751250  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:50.751256  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:50.755159  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:51.250921  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:51.250950  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:51.250961  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:51.250967  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:51.255349  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:51.751245  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:51.751270  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:51.751283  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:51.751290  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:51.755162  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:52.250889  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:52.250916  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:52.250929  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:52.250935  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:52.254351  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:52.255115  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:52.750458  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:52.750481  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:52.750492  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:52.750499  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:52.753763  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:53.251029  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:53.251058  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:53.251071  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:53.251077  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:53.256338  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:01:53.751364  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:53.751389  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:53.751401  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:53.751414  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:53.754657  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:54.250629  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:54.250665  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:54.250675  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:54.250680  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:54.254457  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:54.255509  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:54.750450  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:54.750484  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:54.750496  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:54.750502  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:54.753928  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:55.251309  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:55.251338  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:55.251347  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:55.251351  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:55.254751  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:55.751050  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:55.751076  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:55.751089  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:55.751093  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:55.755810  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:56.250473  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:56.250504  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:56.250524  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:56.250530  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:56.253711  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:56.751414  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:56.751435  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:56.751444  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:56.751449  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:56.755218  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:56.755864  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:57.251118  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:57.251142  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:57.251150  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:57.251154  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:57.254747  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:57.750776  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:57.750806  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:57.750817  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:57.750829  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:57.754143  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:58.251295  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:58.251320  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:58.251329  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:58.251333  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:58.254626  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:58.750576  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:58.750599  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:58.750608  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:58.750611  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:58.754105  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:59.251173  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:59.251200  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:59.251209  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:59.251213  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:59.254355  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:59.255121  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:59.750953  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:59.750977  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:59.750985  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:59.750989  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:59.754627  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:00.250978  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:00.251004  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:00.251013  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:00.251016  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:00.254467  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:00.750877  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:00.750901  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:00.750912  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:00.750918  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:00.754221  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:01.251370  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:01.251393  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:01.251401  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:01.251405  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:01.254805  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:01.255406  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:01.750655  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:01.750676  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:01.750684  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:01.750690  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:01.753736  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:02.251367  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:02.251390  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:02.251399  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:02.251403  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:02.255039  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:02.750915  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:02.750948  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:02.750958  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:02.750964  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:02.754145  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:03.250760  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:03.250788  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:03.250798  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:03.250805  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:03.260534  106017 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:02:03.261313  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:03.750548  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:03.750571  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:03.750582  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:03.750587  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:03.753887  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:04.250808  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:04.250830  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:04.250838  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:04.250841  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:04.254163  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:04.750428  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:04.750453  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:04.750464  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:04.750469  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:04.754235  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.251014  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:05.251038  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:05.251053  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:05.251061  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:05.254268  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.751257  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:05.751286  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:05.751300  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:05.751309  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:05.754346  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.755137  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:06.250474  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:06.250500  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:06.250510  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:06.250515  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:06.253901  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:06.751012  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:06.751043  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:06.751062  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:06.751067  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:06.755777  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:02:07.250458  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:07.250481  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.250489  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.250494  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.254349  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:07.751140  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:07.751164  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.751172  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.751178  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.754545  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:07.755268  106017 node_ready.go:49] node "ha-565823-m03" has status "Ready":"True"
	I1212 00:02:07.755289  106017 node_ready.go:38] duration metric: took 19.505070997s for node "ha-565823-m03" to be "Ready" ...
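
The loop above is minikube polling GET /api/v1/nodes/ha-565823-m03 roughly every 500ms until the node reports a Ready condition, which in this run took about 19.5s. As a rough, hypothetical sketch of the same check written against client-go (not minikube's actual code; the kubeconfig path, node name, and timeouts below are assumptions taken from this run):

// Illustrative sketch only: poll a node until its Ready condition is True,
// mirroring the GET loop recorded in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default path; the node name is copied from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-565823-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready:", err == nil)
}
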
	I1212 00:02:07.755298  106017 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:02:07.755371  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:07.755381  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.755388  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.755394  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.764865  106017 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:02:07.771847  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.771957  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4q46c
	I1212 00:02:07.771969  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.771979  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.771985  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.774662  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.775180  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.775197  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.775207  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.775212  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.778204  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.778657  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.778673  106017 pod_ready.go:82] duration metric: took 6.798091ms for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.778684  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.778739  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mqzbv
	I1212 00:02:07.778749  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.778759  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.778766  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.780968  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.781650  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.781667  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.781674  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.781679  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.783908  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.784542  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.784564  106017 pod_ready.go:82] duration metric: took 5.872725ms for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.784576  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.784636  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823
	I1212 00:02:07.784644  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.784651  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.784657  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.786892  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.787666  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.787681  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.787688  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.787694  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.789880  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.790470  106017 pod_ready.go:93] pod "etcd-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.790486  106017 pod_ready.go:82] duration metric: took 5.899971ms for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.790494  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.790537  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m02
	I1212 00:02:07.790545  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.790552  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.790555  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.793137  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.793764  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:07.793781  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.793791  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.793799  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.796241  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.796610  106017 pod_ready.go:93] pod "etcd-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.796625  106017 pod_ready.go:82] duration metric: took 6.124204ms for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.796636  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.952109  106017 request.go:632] Waited for 155.381921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m03
	I1212 00:02:07.952174  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m03
	I1212 00:02:07.952179  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.952187  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.952193  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.955641  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.151556  106017 request.go:632] Waited for 195.239119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:08.151668  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:08.151684  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.151694  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.151702  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.154961  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.155639  106017 pod_ready.go:93] pod "etcd-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.155660  106017 pod_ready.go:82] duration metric: took 359.016335ms for pod "etcd-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
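
The interleaved "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter queueing the burst of GETs, not from the API server's priority-and-fairness feature. A minimal, hypothetical sketch of raising those limits on a rest.Config (the QPS/Burst values below are arbitrary; the library defaults are much lower):

// Illustrative sketch: loosen client-go's client-side rate limits so a burst
// of readiness GETs like the ones above would not be queued locally.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // arbitrary; the default is only a few requests/second
	cfg.Burst = 100 // arbitrary; small default bursts are what trigger the waits above
	_ = kubernetes.NewForConfigOrDie(cfg)
}
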
	I1212 00:02:08.155677  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.351679  106017 request.go:632] Waited for 195.932687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:02:08.351780  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:02:08.351790  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.351808  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.351821  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.355049  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.552214  106017 request.go:632] Waited for 196.357688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:08.552278  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:08.552283  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.552291  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.552295  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.555420  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.555971  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.555995  106017 pod_ready.go:82] duration metric: took 400.310286ms for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.556009  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.752055  106017 request.go:632] Waited for 195.936446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:02:08.752134  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:02:08.752141  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.752152  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.752161  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.755742  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.951367  106017 request.go:632] Waited for 194.249731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:08.951449  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:08.951462  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.951477  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.951487  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.956306  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:02:08.956889  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.956911  106017 pod_ready.go:82] duration metric: took 400.890038ms for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.956924  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.152049  106017 request.go:632] Waited for 195.045457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m03
	I1212 00:02:09.152139  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m03
	I1212 00:02:09.152145  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.152153  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.152158  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.155700  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.351978  106017 request.go:632] Waited for 195.381489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:09.352057  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:09.352066  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.352075  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.352081  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.355842  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.356358  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:09.356379  106017 pod_ready.go:82] duration metric: took 399.447689ms for pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.356389  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.551411  106017 request.go:632] Waited for 194.933011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:02:09.551471  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:02:09.551476  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.551485  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.551489  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.554894  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.751755  106017 request.go:632] Waited for 196.244381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:09.751835  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:09.751841  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.751848  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.751854  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.754952  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.755722  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:09.755745  106017 pod_ready.go:82] duration metric: took 399.345607ms for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.755761  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.951966  106017 request.go:632] Waited for 196.120958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:02:09.952068  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:02:09.952080  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.952092  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.952104  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.955804  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.152052  106017 request.go:632] Waited for 195.597395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:10.152141  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:10.152152  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.152161  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.152166  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.155038  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:10.155549  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.155569  106017 pod_ready.go:82] duration metric: took 399.796008ms for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.155583  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.351722  106017 request.go:632] Waited for 196.013906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m03
	I1212 00:02:10.351803  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m03
	I1212 00:02:10.351811  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.351826  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.351837  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.355190  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.551684  106017 request.go:632] Waited for 195.377569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:10.551808  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:10.551816  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.551824  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.551829  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.555651  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.556178  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.556199  106017 pod_ready.go:82] duration metric: took 400.605936ms for pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.556213  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.751531  106017 request.go:632] Waited for 195.242482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:02:10.751632  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:02:10.751654  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.751669  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.751679  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.755253  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.951536  106017 request.go:632] Waited for 195.352907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:10.951607  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:10.951622  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.951633  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.951641  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.954707  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.955175  106017 pod_ready.go:93] pod "kube-proxy-hr5qc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.955193  106017 pod_ready.go:82] duration metric: took 398.973413ms for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.955204  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klpqs" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.151212  106017 request.go:632] Waited for 195.914198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-klpqs
	I1212 00:02:11.151269  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-klpqs
	I1212 00:02:11.151274  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.151282  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.151285  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.154675  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.351669  106017 request.go:632] Waited for 196.350446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:11.351765  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:11.351776  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.351788  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.351796  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.354976  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.355603  106017 pod_ready.go:93] pod "kube-proxy-klpqs" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:11.355620  106017 pod_ready.go:82] duration metric: took 400.410567ms for pod "kube-proxy-klpqs" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.355631  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.551803  106017 request.go:632] Waited for 196.076188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:02:11.551880  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:02:11.551892  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.551903  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.551915  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.555786  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.751843  106017 request.go:632] Waited for 195.375551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:11.751907  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:11.751912  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.751919  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.751924  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.755210  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.755911  106017 pod_ready.go:93] pod "kube-proxy-p2lsd" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:11.755936  106017 pod_ready.go:82] duration metric: took 400.297319ms for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.755951  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.951789  106017 request.go:632] Waited for 195.74885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:02:11.951866  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:02:11.951874  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.951891  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.951904  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.955633  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.152006  106017 request.go:632] Waited for 195.692099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:12.152097  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:12.152112  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.152120  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.152125  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.155247  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.155984  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.156005  106017 pod_ready.go:82] duration metric: took 400.045384ms for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.156015  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.352045  106017 request.go:632] Waited for 195.938605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:02:12.352121  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:02:12.352126  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.352134  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.352143  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.355894  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.551904  106017 request.go:632] Waited for 195.351995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:12.551970  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:12.551977  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.551988  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.551993  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.555652  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.556289  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.556309  106017 pod_ready.go:82] duration metric: took 400.287227ms for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.556319  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.751148  106017 request.go:632] Waited for 194.747976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m03
	I1212 00:02:12.751223  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m03
	I1212 00:02:12.751231  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.751244  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.751260  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.754576  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.951572  106017 request.go:632] Waited for 196.386091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:12.951672  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:12.951678  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.951689  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.951693  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.954814  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.955311  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.955329  106017 pod_ready.go:82] duration metric: took 398.995551ms for pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.955348  106017 pod_ready.go:39] duration metric: took 5.200033872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
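
Each of the per-pod checks above follows the same pattern: GET the named pod in kube-system, confirm its Ready condition, then GET the node it is scheduled on. A hypothetical helper showing just the pod half of that check (the pod name is copied from the log; error handling is trimmed; this is not minikube's implementation):

// Illustrative sketch: "is this kube-system pod Ready" as a single predicate.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	pod, err := c.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	c := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(context.Background(), c, "etcd-ha-565823-m03")
	fmt.Println(ready, err)
}
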
	I1212 00:02:12.955369  106017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:02:12.955437  106017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:02:12.971324  106017 api_server.go:72] duration metric: took 25.051879033s to wait for apiserver process to appear ...
	I1212 00:02:12.971354  106017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:02:12.971379  106017 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1212 00:02:12.977750  106017 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1212 00:02:12.977832  106017 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1212 00:02:12.977843  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.977856  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.977863  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.978833  106017 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:02:12.978904  106017 api_server.go:141] control plane version: v1.31.2
	I1212 00:02:12.978918  106017 api_server.go:131] duration metric: took 7.558877ms to wait for apiserver health ...
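
The apiserver gate above is a plain GET on /healthz that must return the body "ok", followed by a read of /version to report the control-plane version (v1.31.2 in this run). A small sketch of the same two calls through a clientset's discovery client (kubeconfig path assumed; not minikube's own code path):

// Illustrative sketch: hit /healthz and /version the way the log describes.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	c := kubernetes.NewForConfigOrDie(cfg)

	body, err := c.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	fmt.Printf("healthz: %q err: %v\n", string(body), err) // a healthy server answers "ok"

	v, err := c.Discovery().ServerVersion()
	if err == nil {
		fmt.Println("control plane version:", v.GitVersion)
	}
}
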
	I1212 00:02:12.978926  106017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:02:13.151199  106017 request.go:632] Waited for 172.198927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.151292  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.151303  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.151316  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.151325  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.157197  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:02:13.164153  106017 system_pods.go:59] 24 kube-system pods found
	I1212 00:02:13.164182  106017 system_pods.go:61] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:02:13.164187  106017 system_pods.go:61] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:02:13.164191  106017 system_pods.go:61] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:02:13.164194  106017 system_pods.go:61] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:02:13.164197  106017 system_pods.go:61] "etcd-ha-565823-m03" [506e75d1-9e81-4c24-bf45-26f7fde169fa] Running
	I1212 00:02:13.164200  106017 system_pods.go:61] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:02:13.164203  106017 system_pods.go:61] "kindnet-jffrr" [d455764c-714e-4a39-9d11-1fc4ab3ae0c9] Running
	I1212 00:02:13.164206  106017 system_pods.go:61] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:02:13.164209  106017 system_pods.go:61] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:02:13.164211  106017 system_pods.go:61] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:02:13.164214  106017 system_pods.go:61] "kube-apiserver-ha-565823-m03" [636f5858-1c42-480d-9810-abf8aa16aa69] Running
	I1212 00:02:13.164218  106017 system_pods.go:61] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:02:13.164221  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:02:13.164224  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m03" [47632e43-a401-4553-9bba-e8296023a6a2] Running
	I1212 00:02:13.164227  106017 system_pods.go:61] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:02:13.164230  106017 system_pods.go:61] "kube-proxy-klpqs" [42725ff5-dd5d-455f-a29a-9ce6c4b8f810] Running
	I1212 00:02:13.164233  106017 system_pods.go:61] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:02:13.164236  106017 system_pods.go:61] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:02:13.164240  106017 system_pods.go:61] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:02:13.164243  106017 system_pods.go:61] "kube-scheduler-ha-565823-m03" [467b67ab-33b8-4e90-b3d7-73f233c0a9e2] Running
	I1212 00:02:13.164246  106017 system_pods.go:61] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:02:13.164249  106017 system_pods.go:61] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:02:13.164251  106017 system_pods.go:61] "kube-vip-ha-565823-m03" [768639dc-dd70-4124-99c0-4e4d9b9bb9b5] Running
	I1212 00:02:13.164254  106017 system_pods.go:61] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:02:13.164259  106017 system_pods.go:74] duration metric: took 185.327636ms to wait for pod list to return data ...
	I1212 00:02:13.164271  106017 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:02:13.351702  106017 request.go:632] Waited for 187.33366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:02:13.351785  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:02:13.351793  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.351804  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.351814  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.355589  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:13.355716  106017 default_sa.go:45] found service account: "default"
	I1212 00:02:13.355732  106017 default_sa.go:55] duration metric: took 191.453257ms for default service account to be created ...
	I1212 00:02:13.355741  106017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:02:13.552179  106017 request.go:632] Waited for 196.355674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.552246  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.552253  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.552265  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.552274  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.558546  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:02:13.567311  106017 system_pods.go:86] 24 kube-system pods found
	I1212 00:02:13.567335  106017 system_pods.go:89] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:02:13.567341  106017 system_pods.go:89] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:02:13.567345  106017 system_pods.go:89] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:02:13.567349  106017 system_pods.go:89] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:02:13.567352  106017 system_pods.go:89] "etcd-ha-565823-m03" [506e75d1-9e81-4c24-bf45-26f7fde169fa] Running
	I1212 00:02:13.567355  106017 system_pods.go:89] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:02:13.567359  106017 system_pods.go:89] "kindnet-jffrr" [d455764c-714e-4a39-9d11-1fc4ab3ae0c9] Running
	I1212 00:02:13.567362  106017 system_pods.go:89] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:02:13.567366  106017 system_pods.go:89] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:02:13.567369  106017 system_pods.go:89] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:02:13.567373  106017 system_pods.go:89] "kube-apiserver-ha-565823-m03" [636f5858-1c42-480d-9810-abf8aa16aa69] Running
	I1212 00:02:13.567377  106017 system_pods.go:89] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:02:13.567380  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:02:13.567384  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m03" [47632e43-a401-4553-9bba-e8296023a6a2] Running
	I1212 00:02:13.567387  106017 system_pods.go:89] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:02:13.567390  106017 system_pods.go:89] "kube-proxy-klpqs" [42725ff5-dd5d-455f-a29a-9ce6c4b8f810] Running
	I1212 00:02:13.567393  106017 system_pods.go:89] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:02:13.567396  106017 system_pods.go:89] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:02:13.567400  106017 system_pods.go:89] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:02:13.567404  106017 system_pods.go:89] "kube-scheduler-ha-565823-m03" [467b67ab-33b8-4e90-b3d7-73f233c0a9e2] Running
	I1212 00:02:13.567406  106017 system_pods.go:89] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:02:13.567411  106017 system_pods.go:89] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:02:13.567416  106017 system_pods.go:89] "kube-vip-ha-565823-m03" [768639dc-dd70-4124-99c0-4e4d9b9bb9b5] Running
	I1212 00:02:13.567419  106017 system_pods.go:89] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:02:13.567425  106017 system_pods.go:126] duration metric: took 211.677185ms to wait for k8s-apps to be running ...
	I1212 00:02:13.567435  106017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:02:13.567479  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:02:13.584100  106017 system_svc.go:56] duration metric: took 16.645631ms WaitForService to wait for kubelet
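
The kubelet check above is executed over SSH inside the guest (via minikube's ssh_runner) and boils down to the exit status of systemctl. A local, hypothetical sketch of that predicate, run directly on the current host rather than over SSH:

// Illustrative sketch: "is-active --quiet" prints nothing and exits 0 only
// when the unit is active, so the error value alone answers the question.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
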
	I1212 00:02:13.584137  106017 kubeadm.go:582] duration metric: took 25.664696546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:02:13.584164  106017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:02:13.751620  106017 request.go:632] Waited for 167.335283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1212 00:02:13.751682  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1212 00:02:13.751687  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.751694  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.751707  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.755649  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:13.756501  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756522  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756532  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756535  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756538  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756541  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756545  106017 node_conditions.go:105] duration metric: took 172.375714ms to run NodePressure ...
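
The NodePressure step above finishes by printing each node's ephemeral-storage and cpu capacity (17734596Ki and 2 CPUs per node in this run). A sketch that lists the nodes and reads the same two capacity fields (kubeconfig path assumed; output format is illustrative only):

// Illustrative sketch: read ephemeral-storage and cpu capacity for every node.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	c := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := c.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
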
	I1212 00:02:13.756565  106017 start.go:241] waiting for startup goroutines ...
	I1212 00:02:13.756588  106017 start.go:255] writing updated cluster config ...
	I1212 00:02:13.756868  106017 ssh_runner.go:195] Run: rm -f paused
	I1212 00:02:13.808453  106017 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 00:02:13.810275  106017 out.go:177] * Done! kubectl is now configured to use "ha-565823" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.505349499Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-x4p94,Uid:6c1cc1db-013c-4f02-bc24-0e633c565129,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733961735326351526,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-12T00:02:14.712914268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b87f311a-6a5e-42bd-8091-6b771551e24c,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1733961597746566064,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-11T23:59:57.423519655Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mqzbv,Uid:0103eb36-35d9-48da-9244-89cc2ea25ec4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733961597745009410,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-11T23:59:57.423730659Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-4q46c,Uid:0b135b50-44c6-455c-85c0-d72033038d11,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1733961597719162269,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b135b50-44c6-455c-85c0-d72033038d11,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-11T23:59:57.412478413Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&PodSandboxMetadata{Name:kindnet-hz9rk,Uid:1198ce2d-aac5-4e9f-9605-22e06dc18348,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733961580920269240,Labels:map[string]string{app: kindnet,controller-revision-hash: 7dff7cd75d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-12-11T23:59:39.976724750Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&PodSandboxMetadata{Name:kube-proxy-hr5qc,Uid:88445d08-4d68-4ca2-b91a-125924b109da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733961580911912840,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-11T23:59:39.987971168Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-565823,Uid:fae40d20051ab63ee6c84f456649100b,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1733961568885546261,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.19:8443,kubernetes.io/config.hash: fae40d20051ab63ee6c84f456649100b,kubernetes.io/config.seen: 2024-12-11T23:59:28.395387781Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-565823,Uid:eaa7a8577c4c0d2b65a93222694855a4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733961568881356268,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c
0d2b65a93222694855a4,},Annotations:map[string]string{kubernetes.io/config.hash: eaa7a8577c4c0d2b65a93222694855a4,kubernetes.io/config.seen: 2024-12-11T23:59:28.395391091Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-565823,Uid:0fddadc76c4b2da11fc48dabaf0f7ded,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733961568877045946,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0fddadc76c4b2da11fc48dabaf0f7ded,kubernetes.io/config.seen: 2024-12-11T23:59:28.395389010Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65b
cdfcce237cda7a9bc51e50,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565823,Uid:a41f6f20361d8a099d64b4adbb7842d4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733961568875663855,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a41f6f20361d8a099d64b4adbb7842d4,kubernetes.io/config.seen: 2024-12-11T23:59:28.395390281Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&PodSandboxMetadata{Name:etcd-ha-565823,Uid:a76b767f5584521bc3a8a4e6679c0b2e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733961568863206855,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-565823,io
.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.19:2379,kubernetes.io/config.hash: a76b767f5584521bc3a8a4e6679c0b2e,kubernetes.io/config.seen: 2024-12-11T23:59:28.395384297Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=20485ebd-081e-46ac-946b-1025cea995fe name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.506023667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f320b8ec-c3b0-4b07-a27f-9fa1c6967151 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.506168273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f320b8ec-c3b0-4b07-a27f-9fa1c6967151 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.506473601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f320b8ec-c3b0-4b07-a27f-9fa1c6967151 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.508015553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6198284-623e-47fb-b327-430e07a515fd name=/runtime.v1.RuntimeService/Version
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.508144001Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6198284-623e-47fb-b327-430e07a515fd name=/runtime.v1.RuntimeService/Version
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.509038202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d163cf67-f1df-4924-ac8d-2150eec6fd62 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.509586562Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961957509567596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d163cf67-f1df-4924-ac8d-2150eec6fd62 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.510124720Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd17bbb1-fce1-43f3-a2aa-f43c05d57db3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.510174749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd17bbb1-fce1-43f3-a2aa-f43c05d57db3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.510419123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd17bbb1-fce1-43f3-a2aa-f43c05d57db3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.552013701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f396c0a-db57-401e-bb90-84b54a8ea752 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.552199821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f396c0a-db57-401e-bb90-84b54a8ea752 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.553391212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01ac4e68-dd59-42f7-9d60-1fead51d72e3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.554157797Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961957554122836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01ac4e68-dd59-42f7-9d60-1fead51d72e3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.554657274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e081d0db-c584-4080-b06c-3742094f9ff5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.554712781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e081d0db-c584-4080-b06c-3742094f9ff5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.555008106Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e081d0db-c584-4080-b06c-3742094f9ff5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.600957943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04cc3144-3eaa-40a2-a0ab-b67ad04f81f3 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.601030721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04cc3144-3eaa-40a2-a0ab-b67ad04f81f3 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.602799700Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=828357c6-4e02-47fb-9f8b-b884a291bcf1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.603357599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961957603323574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=828357c6-4e02-47fb-9f8b-b884a291bcf1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.603764525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d871683-9151-4cee-b65f-80f3622c54d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.603825902Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d871683-9151-4cee-b65f-80f3622c54d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:05:57 ha-565823 crio[664]: time="2024-12-12 00:05:57.604046410Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d871683-9151-4cee-b65f-80f3622c54d5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f0043af06cb92       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0d77818a442ce       busybox-7dff88458-x4p94
	999ac64245591       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   ab4dd7022ef59       coredns-7c65d6cfc9-mqzbv
	0beb663c1a28f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   2787b4f317bfa       coredns-7c65d6cfc9-4q46c
	ba4c8c97ea090       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   4161eb9de6ddb       storage-provisioner
	bfdacc6be0aee       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   332b05e74370f       kindnet-hz9rk
	514637eeaa812       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   920e405616cde       kube-proxy-hr5qc
	768be9c254101       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   87c6df22f8976       kube-vip-ha-565823
	452c6d19b2de9       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   0ab557e831fb3       kube-controller-manager-ha-565823
	743ae8ccc81f5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e6c331c3b3439       etcd-ha-565823
	4f25ff314c2e8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   d851e6de61a68       kube-apiserver-ha-565823
	b28e7b492cfe7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a6c5b082d1924       kube-scheduler-ha-565823
	
	
	==> coredns [0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3] <==
	[INFO] 10.244.1.2:40894 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004450385s
	[INFO] 10.244.1.2:47929 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225565s
	[INFO] 10.244.1.2:51252 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126773s
	[INFO] 10.244.1.2:47545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126535s
	[INFO] 10.244.1.2:37654 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119814s
	[INFO] 10.244.2.2:44808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015021s
	[INFO] 10.244.2.2:48775 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815223s
	[INFO] 10.244.2.2:56148 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132782s
	[INFO] 10.244.2.2:57998 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133493s
	[INFO] 10.244.0.4:39053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087907s
	[INFO] 10.244.0.4:34059 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001091775s
	[INFO] 10.244.1.2:56415 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000835348s
	[INFO] 10.244.1.2:46751 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114147s
	[INFO] 10.244.1.2:35096 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100606s
	[INFO] 10.244.2.2:40358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136169s
	[INFO] 10.244.2.2:56318 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204673s
	[INFO] 10.244.0.4:34528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012651s
	[INFO] 10.244.1.2:56678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145563s
	[INFO] 10.244.1.2:43671 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000363816s
	[INFO] 10.244.1.2:48047 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136942s
	[INFO] 10.244.1.2:35425 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019653s
	[INFO] 10.244.2.2:59862 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112519s
	[INFO] 10.244.0.4:33935 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108695s
	[INFO] 10.244.0.4:51044 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115709s
	[INFO] 10.244.0.4:40489 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092799s
	
	
	==> coredns [999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481] <==
	[INFO] 10.244.0.4:33301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137834s
	[INFO] 10.244.0.4:55709 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001541208s
	[INFO] 10.244.0.4:59133 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001387137s
	[INFO] 10.244.1.2:35268 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004904013s
	[INFO] 10.244.1.2:45390 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166839s
	[INFO] 10.244.2.2:51385 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000248421s
	[INFO] 10.244.2.2:33701 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001310625s
	[INFO] 10.244.2.2:48335 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124081s
	[INFO] 10.244.2.2:58439 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000278252s
	[INFO] 10.244.0.4:51825 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131036s
	[INFO] 10.244.0.4:54179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001798071s
	[INFO] 10.244.0.4:38851 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094604s
	[INFO] 10.244.0.4:48660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050194s
	[INFO] 10.244.0.4:57598 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082654s
	[INFO] 10.244.0.4:43576 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100662s
	[INFO] 10.244.1.2:60988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015105s
	[INFO] 10.244.2.2:60481 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130341s
	[INFO] 10.244.2.2:48427 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079579s
	[INFO] 10.244.0.4:39760 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227961s
	[INFO] 10.244.0.4:48093 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090061s
	[INFO] 10.244.0.4:37075 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076033s
	[INFO] 10.244.2.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000258305s
	[INFO] 10.244.2.2:40866 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177114s
	[INFO] 10.244.2.2:58880 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137362s
	[INFO] 10.244.0.4:60821 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179152s
	
	
	==> describe nodes <==
	Name:               ha-565823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T23_59_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:59:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:05:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-565823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 344476ebea784ce5952c6b9d7486bfc2
	  System UUID:                344476eb-ea78-4ce5-952c-6b9d7486bfc2
	  Boot ID:                    cf8379f5-6946-439d-a3d4-fa7d39c2dea7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x4p94              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-7c65d6cfc9-4q46c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 coredns-7c65d6cfc9-mqzbv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m17s
	  kube-system                 etcd-ha-565823                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-hz9rk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-565823             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-565823    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-hr5qc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-565823             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-565823                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m16s  kube-proxy       
	  Normal  Starting                 6m22s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s  kubelet          Node ha-565823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s  kubelet          Node ha-565823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s  kubelet          Node ha-565823 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m18s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	  Normal  NodeReady                6m     kubelet          Node ha-565823 status is now: NodeReady
	  Normal  RegisteredNode           5m20s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	  Normal  RegisteredNode           4m4s   node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	
	
	Name:               ha-565823-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_00_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:00:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:03:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-565823-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9273c598fccb4678bf93616ea428fab5
	  System UUID:                9273c598-fccb-4678-bf93-616ea428fab5
	  Boot ID:                    73eb7add-f6da-422d-ad45-9773172878c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nsw2n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-565823-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-kr5js                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-565823-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-565823-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-proxy-p2lsd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-565823-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-565823-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-565823-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-565823-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-565823-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m23s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-565823-m02 status is now: NodeNotReady
	
	
	Name:               ha-565823-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_01_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:01:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:05:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:02:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-565823-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7cdc3cdb36e495abaa3ddda542ce8f6
	  System UUID:                a7cdc3cd-b36e-495a-baa3-ddda542ce8f6
	  Boot ID:                    e8069ced-7862-4741-8f56-298b003d0b4d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s8nmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-565823-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m11s
	  kube-system                 kindnet-jffrr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m13s
	  kube-system                 kube-apiserver-ha-565823-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-ha-565823-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-klpqs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-scheduler-ha-565823-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-vip-ha-565823-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m13s (x8 over 4m13s)  kubelet          Node ha-565823-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s (x8 over 4m13s)  kubelet          Node ha-565823-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s (x7 over 4m13s)  kubelet          Node ha-565823-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	  Normal  RegisteredNode           4m4s                   node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	
	
	Name:               ha-565823-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_02_54_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:02:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:05:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:03:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ha-565823-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9da6268e700e4cc18f576f10f66d598f
	  System UUID:                9da6268e-700e-4cc1-8f57-6f10f66d598f
	  Boot ID:                    20440ea1-d260-49fc-a678-9a23de1ac4f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6qk4d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m4s
	  kube-system                 kube-proxy-j59sb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node ha-565823-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node ha-565823-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node ha-565823-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-565823-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec11 23:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053078] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041942] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920910] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec11 23:59] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.625477] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.503596] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.061991] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056761] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.187047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.124910] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.280035] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.149659] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +4.048783] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.069316] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.737553] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.583447] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +5.823487] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.790300] kauditd_printk_skb: 34 callbacks suppressed
	[Dec12 00:00] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b] <==
	{"level":"warn","ts":"2024-12-12T00:05:57.856501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.864578Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.880835Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.888877Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.893673Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.906791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.915722Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.922971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.929454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.932983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.939756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.947214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.953969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.955791Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.957285Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.960558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.966455Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.978262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.985843Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.989002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.993533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:57.999004Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:58.004785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:58.013502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:05:58.056452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:05:58 up 7 min,  0 users,  load average: 0.07, 0.17, 0.09
	Linux ha-565823 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098] <==
	I1212 00:05:27.120565       1 main.go:301] handling current node
	I1212 00:05:37.119553       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:37.119646       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:37.119990       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:37.120019       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	I1212 00:05:37.120347       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:37.120377       1 main.go:301] handling current node
	I1212 00:05:37.120407       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:37.120430       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:47.119691       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:47.119737       1 main.go:301] handling current node
	I1212 00:05:47.119753       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:47.119758       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:47.119987       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:47.119994       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:47.120217       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:47.120242       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	I1212 00:05:57.128438       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:57.128810       1 main.go:301] handling current node
	I1212 00:05:57.128927       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:57.128989       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:57.129767       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:57.129834       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:57.130023       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:57.130046       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95] <==
	I1211 23:59:33.823962       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1211 23:59:33.879965       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1211 23:59:33.896294       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19]
	I1211 23:59:33.897349       1 controller.go:615] quota admission added evaluator for: endpoints
	I1211 23:59:33.902931       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1211 23:59:34.842734       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1211 23:59:35.374409       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1211 23:59:35.395837       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1211 23:59:35.560177       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1211 23:59:39.944410       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1211 23:59:40.344123       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1212 00:02:22.272920       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55802: use of closed network connection
	E1212 00:02:22.464756       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55828: use of closed network connection
	E1212 00:02:22.651355       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55850: use of closed network connection
	E1212 00:02:23.038043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55874: use of closed network connection
	E1212 00:02:23.226745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55900: use of closed network connection
	E1212 00:02:23.410000       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55904: use of closed network connection
	E1212 00:02:23.591256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55924: use of closed network connection
	E1212 00:02:23.770667       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55932: use of closed network connection
	E1212 00:02:24.076679       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55962: use of closed network connection
	E1212 00:02:24.252739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55982: use of closed network connection
	E1212 00:02:24.461578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56012: use of closed network connection
	E1212 00:02:24.646238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56034: use of closed network connection
	E1212 00:02:24.817848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56044: use of closed network connection
	E1212 00:02:24.999617       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56060: use of closed network connection
	
	
	==> kube-controller-manager [452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1] <==
	I1212 00:02:54.484626       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565823-m04" podCIDRs=["10.244.3.0/24"]
	I1212 00:02:54.484689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.484721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.500323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.636444       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565823-m04"
	I1212 00:02:54.652045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.687694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:55.082775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:57.485970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:57.555718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:58.675906       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:58.734910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:04.836593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:16.466024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:16.466304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565823-m04"
	I1212 00:03:16.485293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:17.501671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:25.341676       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:04:14.668472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:14.669356       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565823-m04"
	I1212 00:04:14.705380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:14.785686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.151428ms"
	I1212 00:04:14.785837       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="78.406µs"
	I1212 00:04:18.764949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:19.939887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	
	
	==> kube-proxy [514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1211 23:59:41.687183       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1211 23:59:41.713699       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	E1211 23:59:41.713883       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:59:41.760766       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1211 23:59:41.760924       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:59:41.761009       1 server_linux.go:169] "Using iptables Proxier"
	I1211 23:59:41.764268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:59:41.765555       1 server.go:483] "Version info" version="v1.31.2"
	I1211 23:59:41.765710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:59:41.768630       1 config.go:105] "Starting endpoint slice config controller"
	I1211 23:59:41.769016       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1211 23:59:41.769876       1 config.go:199] "Starting service config controller"
	I1211 23:59:41.769889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1211 23:59:41.771229       1 config.go:328] "Starting node config controller"
	I1211 23:59:41.771259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1211 23:59:41.871443       1 shared_informer.go:320] Caches are synced for node config
	I1211 23:59:41.871633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1211 23:59:41.871849       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4] <==
	E1211 23:59:33.413263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1211 23:59:35.297693       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:02:14.658309       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="bc1a3365-d32e-42cc-b58c-95a59e72d54b" pod="default/busybox-7dff88458-nsw2n" assumedNode="ha-565823-m02" currentNode="ha-565823-m03"
	E1212 00:02:14.675240       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nsw2n\": pod busybox-7dff88458-nsw2n is already assigned to node \"ha-565823-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nsw2n" node="ha-565823-m03"
	E1212 00:02:14.679553       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bc1a3365-d32e-42cc-b58c-95a59e72d54b(default/busybox-7dff88458-nsw2n) was assumed on ha-565823-m03 but assigned to ha-565823-m02" pod="default/busybox-7dff88458-nsw2n"
	E1212 00:02:14.680513       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nsw2n\": pod busybox-7dff88458-nsw2n is already assigned to node \"ha-565823-m02\"" pod="default/busybox-7dff88458-nsw2n"
	I1212 00:02:14.680708       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nsw2n" node="ha-565823-m02"
	E1212 00:02:14.899144       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-vn6xg is already present in the active queue" pod="default/busybox-7dff88458-vn6xg"
	E1212 00:02:14.936687       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-vn6xg\" not found" pod="default/busybox-7dff88458-vn6xg"
	E1212 00:02:54.574668       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j59sb\": pod kube-proxy-j59sb is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j59sb" node="ha-565823-m04"
	E1212 00:02:54.578200       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6qk4d\": pod kindnet-6qk4d is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6qk4d" node="ha-565823-m04"
	E1212 00:02:54.581395       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b52adb65-9292-42b8-bca8-b4a44c756e15(kube-system/kube-proxy-j59sb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j59sb"
	E1212 00:02:54.582857       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j59sb\": pod kube-proxy-j59sb is already assigned to node \"ha-565823-m04\"" pod="kube-system/kube-proxy-j59sb"
	I1212 00:02:54.582977       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j59sb" node="ha-565823-m04"
	E1212 00:02:54.583674       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8ba90dda-f093-4ba3-abad-427394ebe334(kube-system/kindnet-6qk4d) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6qk4d"
	E1212 00:02:54.583943       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6qk4d\": pod kindnet-6qk4d is already assigned to node \"ha-565823-m04\"" pod="kube-system/kindnet-6qk4d"
	I1212 00:02:54.584002       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6qk4d" node="ha-565823-m04"
	E1212 00:02:54.639291       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lbbhs\": pod kube-proxy-lbbhs is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lbbhs" node="ha-565823-m04"
	E1212 00:02:54.640439       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2061489e-9108-4e76-af40-2fcc1540357b(kube-system/kube-proxy-lbbhs) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lbbhs"
	E1212 00:02:54.640623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lbbhs\": pod kube-proxy-lbbhs is already assigned to node \"ha-565823-m04\"" pod="kube-system/kube-proxy-lbbhs"
	I1212 00:02:54.640743       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lbbhs" node="ha-565823-m04"
	E1212 00:02:54.639802       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pfdgd\": pod kindnet-pfdgd is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pfdgd" node="ha-565823-m04"
	E1212 00:02:54.641599       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5bd86f21-f17e-4d19-8bac-53393aecda0b(kube-system/kindnet-pfdgd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pfdgd"
	E1212 00:02:54.641728       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pfdgd\": pod kindnet-pfdgd is already assigned to node \"ha-565823-m04\"" pod="kube-system/kindnet-pfdgd"
	I1212 00:02:54.641865       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pfdgd" node="ha-565823-m04"
	
	
	==> kubelet <==
	Dec 12 00:04:35 ha-565823 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 00:04:35 ha-565823 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 00:04:35 ha-565823 kubelet[1304]: E1212 00:04:35.644561    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961875641522910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:35 ha-565823 kubelet[1304]: E1212 00:04:35.644914    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961875641522910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:45 ha-565823 kubelet[1304]: E1212 00:04:45.646672    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961885646360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:45 ha-565823 kubelet[1304]: E1212 00:04:45.646986    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961885646360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:55 ha-565823 kubelet[1304]: E1212 00:04:55.649177    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961895648846632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:55 ha-565823 kubelet[1304]: E1212 00:04:55.649229    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961895648846632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:05 ha-565823 kubelet[1304]: E1212 00:05:05.650905    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961905650620490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:05 ha-565823 kubelet[1304]: E1212 00:05:05.650951    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961905650620490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:15 ha-565823 kubelet[1304]: E1212 00:05:15.652272    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961915651820297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:15 ha-565823 kubelet[1304]: E1212 00:05:15.652343    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961915651820297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:25 ha-565823 kubelet[1304]: E1212 00:05:25.654671    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961925654167907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:25 ha-565823 kubelet[1304]: E1212 00:05:25.655016    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961925654167907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.529805    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 00:05:35 ha-565823 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.657687    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961935657273568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.657712    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961935657273568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:45 ha-565823 kubelet[1304]: E1212 00:05:45.659792    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961945659457766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:45 ha-565823 kubelet[1304]: E1212 00:05:45.659845    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961945659457766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:55 ha-565823 kubelet[1304]: E1212 00:05:55.661887    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961955661658114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:55 ha-565823 kubelet[1304]: E1212 00:05:55.662031    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961955661658114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565823 -n ha-565823
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.60s)
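
The two post-mortem commands above (the apiserver status query and the listing of non-Running pods) can be replayed outside the test harness while triaging this failure. Below is a minimal Go sketch of that replay, assuming the out/minikube-linux-amd64 binary and the ha-565823 profile from this run are still present on the build agent; the helper name run is illustrative and is not part of helpers_test.go.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and echoes both the invocation and its combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Println("exit:", err)
	}
}

func main() {
	// Same commands as helpers_test.go:254 and helpers_test.go:261 above.
	run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}", "-p", "ha-565823", "-n", "ha-565823")
	run("kubectl", "--context", "ha-565823", "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running")
}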

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.387377499s)
ha_test.go:415: expected profile "ha-565823" in json of 'profile list' to have "Degraded" status but have "" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-565823\",\"Status\":\"\",\"Config\":{\"Name\":\"ha-565823\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-565823\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.19\",\"Port\":8443,\"KubernetesVersion\"
:\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.103\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.95\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.247\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"lo
gviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\
"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565823 -n ha-565823
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 logs -n 25: (1.487504874s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m03_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m04 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp testdata/cp-test.txt                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m04_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03:/home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m03 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565823 node stop m02 -v=7                                                     | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:58:49
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:58:49.879098  106017 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:58:49.879215  106017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:49.879223  106017 out.go:358] Setting ErrFile to fd 2...
	I1211 23:58:49.879228  106017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:49.879424  106017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:58:49.880067  106017 out.go:352] Setting JSON to false
	I1211 23:58:49.880934  106017 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9672,"bootTime":1733951858,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:58:49.881036  106017 start.go:139] virtualization: kvm guest
	I1211 23:58:49.883482  106017 out.go:177] * [ha-565823] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1211 23:58:49.884859  106017 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:58:49.884853  106017 notify.go:220] Checking for updates...
	I1211 23:58:49.887649  106017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:58:49.889057  106017 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:58:49.890422  106017 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:49.891732  106017 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:58:49.893196  106017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:58:49.894834  106017 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:58:49.929647  106017 out.go:177] * Using the kvm2 driver based on user configuration
	I1211 23:58:49.931090  106017 start.go:297] selected driver: kvm2
	I1211 23:58:49.931102  106017 start.go:901] validating driver "kvm2" against <nil>
	I1211 23:58:49.931118  106017 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:58:49.931896  106017 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:58:49.931980  106017 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1211 23:58:49.946877  106017 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1211 23:58:49.946925  106017 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:58:49.947184  106017 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:58:49.947219  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:58:49.947291  106017 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1211 23:58:49.947306  106017 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:58:49.947387  106017 start.go:340] cluster config:
	{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1211 23:58:49.947534  106017 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:58:49.949244  106017 out.go:177] * Starting "ha-565823" primary control-plane node in "ha-565823" cluster
	I1211 23:58:49.950461  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:58:49.950504  106017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:58:49.950517  106017 cache.go:56] Caching tarball of preloaded images
	I1211 23:58:49.950593  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:58:49.950607  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:58:49.950924  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:58:49.950947  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json: {Name:mk87ab89a0730849be8d507f8c0453b4c014ad9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:58:49.951100  106017 start.go:360] acquireMachinesLock for ha-565823: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:58:49.951143  106017 start.go:364] duration metric: took 25.725µs to acquireMachinesLock for "ha-565823"
	I1211 23:58:49.951167  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:58:49.951248  106017 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:58:49.952920  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 23:58:49.953077  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:58:49.953130  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:58:49.967497  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I1211 23:58:49.967981  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:58:49.968550  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:58:49.968587  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:58:49.968981  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:58:49.969194  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:58:49.969410  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:58:49.969566  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1211 23:58:49.969614  106017 client.go:168] LocalClient.Create starting
	I1211 23:58:49.969660  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:58:49.969702  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:58:49.969727  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:58:49.969804  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:58:49.969833  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:58:49.969852  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:58:49.969875  106017 main.go:141] libmachine: Running pre-create checks...
	I1211 23:58:49.969887  106017 main.go:141] libmachine: (ha-565823) Calling .PreCreateCheck
	I1211 23:58:49.970228  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:58:49.970579  106017 main.go:141] libmachine: Creating machine...
	I1211 23:58:49.970592  106017 main.go:141] libmachine: (ha-565823) Calling .Create
	I1211 23:58:49.970720  106017 main.go:141] libmachine: (ha-565823) Creating KVM machine...
	I1211 23:58:49.971894  106017 main.go:141] libmachine: (ha-565823) DBG | found existing default KVM network
	I1211 23:58:49.972543  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:49.972397  106042 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I1211 23:58:49.972595  106017 main.go:141] libmachine: (ha-565823) DBG | created network xml: 
	I1211 23:58:49.972612  106017 main.go:141] libmachine: (ha-565823) DBG | <network>
	I1211 23:58:49.972619  106017 main.go:141] libmachine: (ha-565823) DBG |   <name>mk-ha-565823</name>
	I1211 23:58:49.972628  106017 main.go:141] libmachine: (ha-565823) DBG |   <dns enable='no'/>
	I1211 23:58:49.972641  106017 main.go:141] libmachine: (ha-565823) DBG |   
	I1211 23:58:49.972653  106017 main.go:141] libmachine: (ha-565823) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1211 23:58:49.972659  106017 main.go:141] libmachine: (ha-565823) DBG |     <dhcp>
	I1211 23:58:49.972666  106017 main.go:141] libmachine: (ha-565823) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1211 23:58:49.972678  106017 main.go:141] libmachine: (ha-565823) DBG |     </dhcp>
	I1211 23:58:49.972689  106017 main.go:141] libmachine: (ha-565823) DBG |   </ip>
	I1211 23:58:49.972696  106017 main.go:141] libmachine: (ha-565823) DBG |   
	I1211 23:58:49.972705  106017 main.go:141] libmachine: (ha-565823) DBG | </network>
	I1211 23:58:49.972742  106017 main.go:141] libmachine: (ha-565823) DBG | 
	I1211 23:58:49.977592  106017 main.go:141] libmachine: (ha-565823) DBG | trying to create private KVM network mk-ha-565823 192.168.39.0/24...
	I1211 23:58:50.045920  106017 main.go:141] libmachine: (ha-565823) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 ...
	I1211 23:58:50.045945  106017 main.go:141] libmachine: (ha-565823) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:58:50.045957  106017 main.go:141] libmachine: (ha-565823) DBG | private KVM network mk-ha-565823 192.168.39.0/24 created
	I1211 23:58:50.045974  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.045851  106042 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:50.046037  106017 main.go:141] libmachine: (ha-565823) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:58:50.332532  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.332355  106042 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa...
	I1211 23:58:50.607374  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.607211  106042 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/ha-565823.rawdisk...
	I1211 23:58:50.607405  106017 main.go:141] libmachine: (ha-565823) DBG | Writing magic tar header
	I1211 23:58:50.607418  106017 main.go:141] libmachine: (ha-565823) DBG | Writing SSH key tar header
	I1211 23:58:50.607425  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.607336  106042 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 ...
	I1211 23:58:50.607436  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823
	I1211 23:58:50.607514  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:58:50.607560  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 (perms=drwx------)
	I1211 23:58:50.607571  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:50.607581  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:58:50.607606  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:58:50.607624  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:58:50.607642  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:58:50.607654  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home
	I1211 23:58:50.607666  106017 main.go:141] libmachine: (ha-565823) DBG | Skipping /home - not owner
	I1211 23:58:50.607678  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:58:50.607687  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:58:50.607693  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:58:50.607704  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:58:50.607717  106017 main.go:141] libmachine: (ha-565823) Creating domain...
	I1211 23:58:50.608802  106017 main.go:141] libmachine: (ha-565823) define libvirt domain using xml: 
	I1211 23:58:50.608821  106017 main.go:141] libmachine: (ha-565823) <domain type='kvm'>
	I1211 23:58:50.608828  106017 main.go:141] libmachine: (ha-565823)   <name>ha-565823</name>
	I1211 23:58:50.608832  106017 main.go:141] libmachine: (ha-565823)   <memory unit='MiB'>2200</memory>
	I1211 23:58:50.608838  106017 main.go:141] libmachine: (ha-565823)   <vcpu>2</vcpu>
	I1211 23:58:50.608842  106017 main.go:141] libmachine: (ha-565823)   <features>
	I1211 23:58:50.608846  106017 main.go:141] libmachine: (ha-565823)     <acpi/>
	I1211 23:58:50.608850  106017 main.go:141] libmachine: (ha-565823)     <apic/>
	I1211 23:58:50.608857  106017 main.go:141] libmachine: (ha-565823)     <pae/>
	I1211 23:58:50.608868  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.608875  106017 main.go:141] libmachine: (ha-565823)   </features>
	I1211 23:58:50.608879  106017 main.go:141] libmachine: (ha-565823)   <cpu mode='host-passthrough'>
	I1211 23:58:50.608887  106017 main.go:141] libmachine: (ha-565823)   
	I1211 23:58:50.608891  106017 main.go:141] libmachine: (ha-565823)   </cpu>
	I1211 23:58:50.608898  106017 main.go:141] libmachine: (ha-565823)   <os>
	I1211 23:58:50.608902  106017 main.go:141] libmachine: (ha-565823)     <type>hvm</type>
	I1211 23:58:50.608977  106017 main.go:141] libmachine: (ha-565823)     <boot dev='cdrom'/>
	I1211 23:58:50.609011  106017 main.go:141] libmachine: (ha-565823)     <boot dev='hd'/>
	I1211 23:58:50.609024  106017 main.go:141] libmachine: (ha-565823)     <bootmenu enable='no'/>
	I1211 23:58:50.609036  106017 main.go:141] libmachine: (ha-565823)   </os>
	I1211 23:58:50.609043  106017 main.go:141] libmachine: (ha-565823)   <devices>
	I1211 23:58:50.609052  106017 main.go:141] libmachine: (ha-565823)     <disk type='file' device='cdrom'>
	I1211 23:58:50.609063  106017 main.go:141] libmachine: (ha-565823)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/boot2docker.iso'/>
	I1211 23:58:50.609074  106017 main.go:141] libmachine: (ha-565823)       <target dev='hdc' bus='scsi'/>
	I1211 23:58:50.609085  106017 main.go:141] libmachine: (ha-565823)       <readonly/>
	I1211 23:58:50.609094  106017 main.go:141] libmachine: (ha-565823)     </disk>
	I1211 23:58:50.609105  106017 main.go:141] libmachine: (ha-565823)     <disk type='file' device='disk'>
	I1211 23:58:50.609117  106017 main.go:141] libmachine: (ha-565823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:58:50.609133  106017 main.go:141] libmachine: (ha-565823)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/ha-565823.rawdisk'/>
	I1211 23:58:50.609144  106017 main.go:141] libmachine: (ha-565823)       <target dev='hda' bus='virtio'/>
	I1211 23:58:50.609154  106017 main.go:141] libmachine: (ha-565823)     </disk>
	I1211 23:58:50.609164  106017 main.go:141] libmachine: (ha-565823)     <interface type='network'>
	I1211 23:58:50.609176  106017 main.go:141] libmachine: (ha-565823)       <source network='mk-ha-565823'/>
	I1211 23:58:50.609187  106017 main.go:141] libmachine: (ha-565823)       <model type='virtio'/>
	I1211 23:58:50.609198  106017 main.go:141] libmachine: (ha-565823)     </interface>
	I1211 23:58:50.609209  106017 main.go:141] libmachine: (ha-565823)     <interface type='network'>
	I1211 23:58:50.609221  106017 main.go:141] libmachine: (ha-565823)       <source network='default'/>
	I1211 23:58:50.609230  106017 main.go:141] libmachine: (ha-565823)       <model type='virtio'/>
	I1211 23:58:50.609240  106017 main.go:141] libmachine: (ha-565823)     </interface>
	I1211 23:58:50.609249  106017 main.go:141] libmachine: (ha-565823)     <serial type='pty'>
	I1211 23:58:50.609271  106017 main.go:141] libmachine: (ha-565823)       <target port='0'/>
	I1211 23:58:50.609292  106017 main.go:141] libmachine: (ha-565823)     </serial>
	I1211 23:58:50.609319  106017 main.go:141] libmachine: (ha-565823)     <console type='pty'>
	I1211 23:58:50.609342  106017 main.go:141] libmachine: (ha-565823)       <target type='serial' port='0'/>
	I1211 23:58:50.609358  106017 main.go:141] libmachine: (ha-565823)     </console>
	I1211 23:58:50.609368  106017 main.go:141] libmachine: (ha-565823)     <rng model='virtio'>
	I1211 23:58:50.609380  106017 main.go:141] libmachine: (ha-565823)       <backend model='random'>/dev/random</backend>
	I1211 23:58:50.609388  106017 main.go:141] libmachine: (ha-565823)     </rng>
	I1211 23:58:50.609393  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.609399  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.609404  106017 main.go:141] libmachine: (ha-565823)   </devices>
	I1211 23:58:50.609412  106017 main.go:141] libmachine: (ha-565823) </domain>
	I1211 23:58:50.609425  106017 main.go:141] libmachine: (ha-565823) 
	I1211 23:58:50.614253  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:5a:5d:6a in network default
	I1211 23:58:50.614867  106017 main.go:141] libmachine: (ha-565823) Ensuring networks are active...
	I1211 23:58:50.614888  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:50.615542  106017 main.go:141] libmachine: (ha-565823) Ensuring network default is active
	I1211 23:58:50.615828  106017 main.go:141] libmachine: (ha-565823) Ensuring network mk-ha-565823 is active
	I1211 23:58:50.616242  106017 main.go:141] libmachine: (ha-565823) Getting domain xml...
	I1211 23:58:50.616898  106017 main.go:141] libmachine: (ha-565823) Creating domain...
	I1211 23:58:51.817451  106017 main.go:141] libmachine: (ha-565823) Waiting to get IP...
	I1211 23:58:51.818184  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:51.818533  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:51.818576  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:51.818514  106042 retry.go:31] will retry after 280.301496ms: waiting for machine to come up
	I1211 23:58:52.100046  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.100502  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.100533  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.100451  106042 retry.go:31] will retry after 276.944736ms: waiting for machine to come up
	I1211 23:58:52.378928  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.379349  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.379382  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.379295  106042 retry.go:31] will retry after 389.022589ms: waiting for machine to come up
	I1211 23:58:52.769835  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.770314  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.770357  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.770269  106042 retry.go:31] will retry after 542.492277ms: waiting for machine to come up
	I1211 23:58:53.313855  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:53.314281  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:53.314305  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:53.314231  106042 retry.go:31] will retry after 742.209465ms: waiting for machine to come up
	I1211 23:58:54.058032  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:54.058453  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:54.058490  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:54.058433  106042 retry.go:31] will retry after 754.421967ms: waiting for machine to come up
	I1211 23:58:54.814555  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:54.814980  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:54.815017  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:54.814915  106042 retry.go:31] will retry after 802.576471ms: waiting for machine to come up
	I1211 23:58:55.619852  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:55.620325  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:55.620362  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:55.620271  106042 retry.go:31] will retry after 1.192308346s: waiting for machine to come up
	I1211 23:58:56.815553  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:56.816025  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:56.816050  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:56.815966  106042 retry.go:31] will retry after 1.618860426s: waiting for machine to come up
	I1211 23:58:58.436766  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:58.437231  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:58.437256  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:58.437186  106042 retry.go:31] will retry after 2.219805666s: waiting for machine to come up
	I1211 23:59:00.658607  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:00.659028  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:00.659058  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:00.658968  106042 retry.go:31] will retry after 1.768582626s: waiting for machine to come up
	I1211 23:59:02.429943  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:02.430433  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:02.430464  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:02.430369  106042 retry.go:31] will retry after 2.185532844s: waiting for machine to come up
	I1211 23:59:04.617032  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:04.617473  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:04.617499  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:04.617419  106042 retry.go:31] will retry after 4.346976865s: waiting for machine to come up
	I1211 23:59:08.969389  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:08.969741  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:08.969760  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:08.969711  106042 retry.go:31] will retry after 4.969601196s: waiting for machine to come up
	I1211 23:59:13.943658  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:13.944048  106017 main.go:141] libmachine: (ha-565823) Found IP for machine: 192.168.39.19
	I1211 23:59:13.944063  106017 main.go:141] libmachine: (ha-565823) Reserving static IP address...
	I1211 23:59:13.944071  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has current primary IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:13.944392  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find host DHCP lease matching {name: "ha-565823", mac: "52:54:00:2b:2e:da", ip: "192.168.39.19"} in network mk-ha-565823
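The retry block above is the driver's wait-for-IP loop: it repeatedly looks up the domain's MAC address in the libvirt DHCP leases and backs off between attempts until an address appears. A minimal Go sketch of that pattern, under the assumption that lookupIP is a hypothetical stand-in for the actual lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP
    // leases for the domain's MAC address; it fails until the guest is up.
    func lookupIP(mac string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(mac); err == nil {
    			return ip, nil
    		}
    		// Jittered, growing delay, mirroring the "will retry after ..." lines.
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff *= 2
    	}
    	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:2b:2e:da", 2*time.Second); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("Found IP for machine:", ip)
    	}
    }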
	I1211 23:59:14.015315  106017 main.go:141] libmachine: (ha-565823) DBG | Getting to WaitForSSH function...
	I1211 23:59:14.015347  106017 main.go:141] libmachine: (ha-565823) Reserved static IP address: 192.168.39.19
	I1211 23:59:14.015425  106017 main.go:141] libmachine: (ha-565823) Waiting for SSH to be available...
	I1211 23:59:14.017689  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:14.018021  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823
	I1211 23:59:14.018050  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find defined IP address of network mk-ha-565823 interface with MAC address 52:54:00:2b:2e:da
	I1211 23:59:14.018183  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH client type: external
	I1211 23:59:14.018223  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa (-rw-------)
	I1211 23:59:14.018268  106017 main.go:141] libmachine: (ha-565823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:59:14.018288  106017 main.go:141] libmachine: (ha-565823) DBG | About to run SSH command:
	I1211 23:59:14.018327  106017 main.go:141] libmachine: (ha-565823) DBG | exit 0
	I1211 23:59:14.021958  106017 main.go:141] libmachine: (ha-565823) DBG | SSH cmd err, output: exit status 255: 
	I1211 23:59:14.021983  106017 main.go:141] libmachine: (ha-565823) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1211 23:59:14.021992  106017 main.go:141] libmachine: (ha-565823) DBG | command : exit 0
	I1211 23:59:14.022004  106017 main.go:141] libmachine: (ha-565823) DBG | err     : exit status 255
	I1211 23:59:14.022014  106017 main.go:141] libmachine: (ha-565823) DBG | output  : 
	I1211 23:59:17.023677  106017 main.go:141] libmachine: (ha-565823) DBG | Getting to WaitForSSH function...
	I1211 23:59:17.026110  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.026503  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.026529  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.026696  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH client type: external
	I1211 23:59:17.026723  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa (-rw-------)
	I1211 23:59:17.026749  106017 main.go:141] libmachine: (ha-565823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:59:17.026776  106017 main.go:141] libmachine: (ha-565823) DBG | About to run SSH command:
	I1211 23:59:17.026792  106017 main.go:141] libmachine: (ha-565823) DBG | exit 0
	I1211 23:59:17.155941  106017 main.go:141] libmachine: (ha-565823) DBG | SSH cmd err, output: <nil>: 
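The WaitForSSH step shells out to the system ssh binary with the options printed in the DBG lines and treats a clean `exit 0` as proof the guest is reachable (the earlier exit status 255 simply means sshd was not up yet). A rough sketch of that external probe, reusing the address and key path from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Options taken from the "Using SSH client type: external" DBG lines above.
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "ControlMaster=no",
    		"-o", "ControlPath=none",
    		"-o", "LogLevel=quiet",
    		"-o", "PasswordAuthentication=no",
    		"-o", "ServerAliveInterval=60",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"docker@192.168.39.19",
    		"-o", "IdentitiesOnly=yes",
    		"-i", "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa",
    		"-p", "22",
    		"exit 0",
    	}
    	if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
    		fmt.Println("SSH not ready yet:", err) // e.g. exit status 255 while sshd is still starting
    		return
    	}
    	fmt.Println("SSH available")
    }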
	I1211 23:59:17.156245  106017 main.go:141] libmachine: (ha-565823) KVM machine creation complete!
	I1211 23:59:17.156531  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:59:17.157110  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:17.157306  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:17.157460  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1211 23:59:17.157473  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:17.158855  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1211 23:59:17.158893  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1211 23:59:17.158902  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1211 23:59:17.158918  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.161015  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.161305  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.161347  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.161435  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.161600  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.161751  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.161869  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.162043  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.162241  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.162251  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1211 23:59:17.270900  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:59:17.270927  106017 main.go:141] libmachine: Detecting the provisioner...
	I1211 23:59:17.270938  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.273797  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.274144  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.274170  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.274323  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.274499  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.274631  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.274743  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.274871  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.275034  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.275045  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1211 23:59:17.388514  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1211 23:59:17.388598  106017 main.go:141] libmachine: found compatible host: buildroot
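Provisioner detection above is just `cat /etc/os-release` followed by matching the ID field ("buildroot"). A small sketch of that parse, using the output printed above as the input:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// The /etc/os-release content reported by the guest above.
    	osRelease := `NAME=Buildroot
    VERSION=2023.02.9-dirty
    ID=buildroot
    VERSION_ID=2023.02.9
    PRETTY_NAME="Buildroot 2023.02.9"`

    	for _, line := range strings.Split(osRelease, "\n") {
    		line = strings.TrimSpace(line)
    		if strings.HasPrefix(line, "ID=") {
    			id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    			fmt.Println("found compatible host:", id)
    		}
    	}
    }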
	I1211 23:59:17.388612  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1211 23:59:17.388622  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.388876  106017 buildroot.go:166] provisioning hostname "ha-565823"
	I1211 23:59:17.388901  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.389119  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.391763  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.392089  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.392117  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.392206  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.392374  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.392583  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.392750  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.392900  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.393085  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.393098  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823 && echo "ha-565823" | sudo tee /etc/hostname
	I1211 23:59:17.517872  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823
	
	I1211 23:59:17.517906  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.520794  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.521115  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.521139  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.521316  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.521505  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.521649  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.521748  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.521909  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.522131  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.522150  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:59:17.641444  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:59:17.641473  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1211 23:59:17.641523  106017 buildroot.go:174] setting up certificates
	I1211 23:59:17.641537  106017 provision.go:84] configureAuth start
	I1211 23:59:17.641550  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.641858  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:17.644632  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.644929  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.644969  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.645145  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.647106  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.647440  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.647460  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.647633  106017 provision.go:143] copyHostCerts
	I1211 23:59:17.647667  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1211 23:59:17.647703  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1211 23:59:17.647712  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1211 23:59:17.647777  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1211 23:59:17.647854  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1211 23:59:17.647873  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1211 23:59:17.647879  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1211 23:59:17.647903  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1211 23:59:17.647943  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1211 23:59:17.647959  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1211 23:59:17.647965  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1211 23:59:17.647985  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1211 23:59:17.648036  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823 san=[127.0.0.1 192.168.39.19 ha-565823 localhost minikube]
	I1211 23:59:17.803088  106017 provision.go:177] copyRemoteCerts
	I1211 23:59:17.803154  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:59:17.803180  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.806065  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.806383  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.806401  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.806621  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.806836  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.806981  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.807172  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:17.894618  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 23:59:17.894691  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 23:59:17.921956  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 23:59:17.922023  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 23:59:17.948821  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 23:59:17.948890  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1211 23:59:17.975580  106017 provision.go:87] duration metric: took 334.027463ms to configureAuth
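configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.39.19, ha-565823, localhost and minikube, then copies server.pem, server-key.pem and ca.pem to /etc/docker on the guest. A self-signed sketch of the generation step only (in the log the cert is signed by the ca.pem key, not by itself, and the expiry comes from CertExpiration:26280h0m0s):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-565823"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs as listed in the "generating server cert" line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.19")},
    		DNSNames:    []string{"ha-565823", "localhost", "minikube"},
    	}
    	// Self-signed for brevity; the real server.pem is signed with ca-key.pem.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }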
	I1211 23:59:17.975634  106017 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:59:17.975827  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:17.975904  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.978577  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.978850  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.978901  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.979082  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.979257  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.979385  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.979493  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.979692  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.979889  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.979912  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:59:18.235267  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:59:18.235313  106017 main.go:141] libmachine: Checking connection to Docker...
	I1211 23:59:18.235325  106017 main.go:141] libmachine: (ha-565823) Calling .GetURL
	I1211 23:59:18.236752  106017 main.go:141] libmachine: (ha-565823) DBG | Using libvirt version 6000000
	I1211 23:59:18.239115  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.239502  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.239532  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.239731  106017 main.go:141] libmachine: Docker is up and running!
	I1211 23:59:18.239753  106017 main.go:141] libmachine: Reticulating splines...
	I1211 23:59:18.239771  106017 client.go:171] duration metric: took 28.270144196s to LocalClient.Create
	I1211 23:59:18.239864  106017 start.go:167] duration metric: took 28.27029823s to libmachine.API.Create "ha-565823"
	I1211 23:59:18.239885  106017 start.go:293] postStartSetup for "ha-565823" (driver="kvm2")
	I1211 23:59:18.239895  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:59:18.239917  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.240179  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:59:18.240211  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.242164  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.242466  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.242493  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.242645  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.242832  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.242993  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.243119  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.330660  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:59:18.335424  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1211 23:59:18.335447  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1211 23:59:18.335503  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1211 23:59:18.335574  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1211 23:59:18.335584  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1211 23:59:18.335717  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 23:59:18.346001  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1211 23:59:18.374524  106017 start.go:296] duration metric: took 134.623519ms for postStartSetup
	I1211 23:59:18.374583  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:59:18.375295  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:18.377900  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.378234  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.378262  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.378516  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:18.378710  106017 start.go:128] duration metric: took 28.427447509s to createHost
	I1211 23:59:18.378738  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.380862  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.381196  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.381220  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.381358  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.381537  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.381691  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.381809  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.381919  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:18.382120  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:18.382133  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:59:18.492450  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961558.472734336
	
	I1211 23:59:18.492473  106017 fix.go:216] guest clock: 1733961558.472734336
	I1211 23:59:18.492480  106017 fix.go:229] Guest: 2024-12-11 23:59:18.472734336 +0000 UTC Remote: 2024-12-11 23:59:18.378724497 +0000 UTC m=+28.540551547 (delta=94.009839ms)
	I1211 23:59:18.492521  106017 fix.go:200] guest clock delta is within tolerance: 94.009839ms
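The fix.go lines above compare the guest clock (`date +%s.%N`) with the host clock and accept the machine because the ~94ms delta is inside the allowed skew. A trivial sketch of that check; the tolerance value is assumed here, since the log does not print it:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the absolute guest/host clock delta
    // is at most tol.
    func withinTolerance(guest, host time.Time, tol time.Duration) bool {
    	d := guest.Sub(host)
    	if d < 0 {
    		d = -d
    	}
    	return d <= tol
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(94 * time.Millisecond) // delta reported in the log
    	fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
    }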
	I1211 23:59:18.492529  106017 start.go:83] releasing machines lock for "ha-565823", held for 28.541373742s
	I1211 23:59:18.492553  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.492820  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:18.495388  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.495716  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.495743  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.495888  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496371  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496534  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496615  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:59:18.496654  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.496714  106017 ssh_runner.go:195] Run: cat /version.json
	I1211 23:59:18.496740  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.499135  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499486  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.499548  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499569  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499675  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.499845  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.499921  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.499961  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499985  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.500123  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.500135  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.500278  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.500460  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.500604  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.607330  106017 ssh_runner.go:195] Run: systemctl --version
	I1211 23:59:18.613387  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:59:18.776622  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:59:18.783443  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:59:18.783538  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:59:18.799688  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
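The cni.go step above finds bridge and podman CNI configs under /etc/cni/net.d and renames them with a .mk_disabled suffix so only the CNI minikube manages stays active. Roughly, as a Go sketch of the find/mv pipeline:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Disable bridge and podman CNI configs that are not already disabled.
    	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pattern)
    		if err != nil {
    			continue
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err == nil {
    				fmt.Println("disabled", m)
    			}
    		}
    	}
    }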
	I1211 23:59:18.799713  106017 start.go:495] detecting cgroup driver to use...
	I1211 23:59:18.799774  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:59:18.816025  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:59:18.830854  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1211 23:59:18.830908  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:59:18.845980  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:59:18.860893  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:59:18.978441  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:59:19.134043  106017 docker.go:233] disabling docker service ...
	I1211 23:59:19.134112  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:59:19.149156  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:59:19.162275  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:59:19.283529  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:59:19.409189  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:59:19.423558  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:59:19.442528  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1211 23:59:19.442599  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.453566  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:59:19.453654  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.464397  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.475199  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.486049  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:59:19.497021  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.507803  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.524919  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.535844  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:59:19.545546  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:59:19.545598  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:59:19.559407  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:59:19.569383  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:59:19.689090  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
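The series of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf before crio is restarted: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is switched to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. The first three line rewrites expressed in Go, slightly simplified and run against an illustrative config fragment:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Illustrative 02-crio.conf fragment before the edits.
    	conf := `pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
    	sed := func(pattern, repl string) {
    		conf = regexp.MustCompile(pattern).ReplaceAllString(conf, repl)
    	}
    	sed(`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`)
    	sed(`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
    	sed(`(?m)^conmon_cgroup = .*$`, `conmon_cgroup = "pod"`)
    	fmt.Print(conf)
    }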
	I1211 23:59:19.791744  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:59:19.791811  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:59:19.796877  106017 start.go:563] Will wait 60s for crictl version
	I1211 23:59:19.796945  106017 ssh_runner.go:195] Run: which crictl
	I1211 23:59:19.801083  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:59:19.845670  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:59:19.845758  106017 ssh_runner.go:195] Run: crio --version
	I1211 23:59:19.875253  106017 ssh_runner.go:195] Run: crio --version
	I1211 23:59:19.904311  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1211 23:59:19.906690  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:19.909356  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:19.909726  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:19.909755  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:19.910412  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:59:19.915735  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:59:19.929145  106017 kubeadm.go:883] updating cluster {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:59:19.929263  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:59:19.929323  106017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:59:19.962567  106017 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1211 23:59:19.962636  106017 ssh_runner.go:195] Run: which lz4
	I1211 23:59:19.966688  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1211 23:59:19.966797  106017 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:59:19.970897  106017 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:59:19.970929  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1211 23:59:21.360986  106017 crio.go:462] duration metric: took 1.394221262s to copy over tarball
	I1211 23:59:21.361088  106017 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:59:23.449972  106017 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.088850329s)
	I1211 23:59:23.450033  106017 crio.go:469] duration metric: took 2.08900198s to extract the tarball
	I1211 23:59:23.450045  106017 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1211 23:59:23.487452  106017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:59:23.534823  106017 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:59:23.534855  106017 cache_images.go:84] Images are preloaded, skipping loading
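The preload path above works in two passes: `crictl images` first shows the expected images are missing, so the ~392 MB preloaded-images tarball is copied to /preloaded.tar.lz4 and unpacked into /var with tar -I lz4; a second `crictl images` then confirms everything is preloaded and the tarball is removed. The extraction call, as an exec sketch of the same command the log runs over SSH:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Println("preload extracted")
    }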
	I1211 23:59:23.534866  106017 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.2 crio true true} ...
	I1211 23:59:23.535012  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:59:23.535085  106017 ssh_runner.go:195] Run: crio config
	I1211 23:59:23.584878  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:59:23.584896  106017 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 23:59:23.584905  106017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 23:59:23.584925  106017 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565823 NodeName:ha-565823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:59:23.585039  106017 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1211 23:59:23.585064  106017 kube-vip.go:115] generating kube-vip config ...
	I1211 23:59:23.585112  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1211 23:59:23.603981  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1211 23:59:23.604115  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
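
	The kube-vip static pod above is produced by filling a fixed pod template with the cluster's HA VIP (192.168.39.254) and API server port (8443). A trimmed sketch of that kind of templating with Go's text/template follows; the template body here is an illustration only, not minikube's actual kube-vip template.

	// kubevip_manifest.go - sketch: render a kube-vip-style static pod
	// manifest from two parameters (VIP address and API server port).
	package main

	import (
		"os"
		"text/template"
	)

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	  hostNetwork: true
	`

	func main() {
		tmpl := template.Must(template.New("kube-vip").Parse(manifest))
		// Values taken from the log above: HA VIP 192.168.39.254 on port 8443.
		data := struct {
			VIP  string
			Port int
		}{VIP: "192.168.39.254", Port: 8443}
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}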
	I1211 23:59:23.604182  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1211 23:59:23.614397  106017 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 23:59:23.614477  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1211 23:59:23.624289  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1211 23:59:23.641517  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:59:23.658716  106017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1211 23:59:23.675660  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1211 23:59:23.692530  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1211 23:59:23.696599  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
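
	The one-liner above rewrites /etc/hosts idempotently: any existing control-plane.minikube.internal entry is dropped and the current VIP mapping is appended. A rough Go equivalent of the same edit is sketched below (it requires root to write /etc/hosts and is not the code minikube runs).

	// hosts_record.go - sketch: drop any stale control-plane.minikube.internal
	// entry from /etc/hosts and append the current VIP mapping.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const vip = "192.168.39.254" // value from the log above

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) { // same filter as the grep -v above
				kept = append(kept, line)
			}
		}
		kept = append(kept, vip+"\t"+host)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}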
	I1211 23:59:23.709445  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:59:23.845220  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:59:23.862954  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.19
	I1211 23:59:23.862981  106017 certs.go:194] generating shared ca certs ...
	I1211 23:59:23.863000  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:23.863207  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1211 23:59:23.863251  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1211 23:59:23.863262  106017 certs.go:256] generating profile certs ...
	I1211 23:59:23.863328  106017 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1211 23:59:23.863357  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt with IP's: []
	I1211 23:59:24.110700  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt ...
	I1211 23:59:24.110730  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt: {Name:mk50d526eb9350fec1f3c58be1ef98b2039770b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.110932  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key ...
	I1211 23:59:24.110948  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key: {Name:mk947a896656d347feed0e5ddd7c2c37edce03fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.111050  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c
	I1211 23:59:24.111082  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.254]
	I1211 23:59:24.333387  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c ...
	I1211 23:59:24.333420  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c: {Name:mkfc61798e61cb1d7ac0b35769a3179525ca368b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.333599  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c ...
	I1211 23:59:24.333627  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c: {Name:mk4a04314c10f352160875e4af47370a91a0db88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.333740  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1211 23:59:24.333840  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1211 23:59:24.333924  106017 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1211 23:59:24.333944  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt with IP's: []
	I1211 23:59:24.464961  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt ...
	I1211 23:59:24.464993  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt: {Name:mkbb1cf3b9047082cee6fcd6adaa9509e1729b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.465183  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key ...
	I1211 23:59:24.465203  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key: {Name:mkc9ec571078b7167489918f5cf8f1ea61967aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.465319  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 23:59:24.465348  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 23:59:24.465364  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 23:59:24.465387  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 23:59:24.465405  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 23:59:24.465422  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 23:59:24.465435  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 23:59:24.465452  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 23:59:24.465528  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1211 23:59:24.465577  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1211 23:59:24.465592  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1211 23:59:24.465634  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1211 23:59:24.465664  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:59:24.465695  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1211 23:59:24.465752  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1211 23:59:24.465790  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.465812  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.465831  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.466545  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:59:24.494141  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:59:24.519556  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:59:24.544702  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 23:59:24.569766  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 23:59:24.595380  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:59:24.621226  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:59:24.649860  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 23:59:24.698075  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1211 23:59:24.728714  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:59:24.753139  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1211 23:59:24.777957  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:59:24.796289  106017 ssh_runner.go:195] Run: openssl version
	I1211 23:59:24.802883  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1211 23:59:24.816553  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.821741  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.821804  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.828574  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1211 23:59:24.840713  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 23:59:24.853013  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.858281  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.858331  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.864829  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 23:59:24.875963  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1211 23:59:24.886500  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.891673  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.891726  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.898344  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
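
	Each certificate above is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). The Go sketch below performs the same two steps for one file by shelling out to the same openssl invocation shown in the log; it assumes the openssl binary is present and needs root privileges for the symlink.

	// cahash_link.go - sketch: compute the OpenSSL subject hash of a PEM
	// certificate and symlink it into /etc/ssl/certs, mirroring the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(pem string) error {
		// Same invocation as the log: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror ln -fs: replace an existing link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}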
	I1211 23:59:24.910633  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:59:24.915220  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:59:24.915279  106017 kubeadm.go:392] StartCluster: {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:59:24.915383  106017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:59:24.915454  106017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:59:24.954743  106017 cri.go:89] found id: ""
	I1211 23:59:24.954813  106017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:59:24.965887  106017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:59:24.975963  106017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:59:24.985759  106017 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:59:24.985784  106017 kubeadm.go:157] found existing configuration files:
	
	I1211 23:59:24.985837  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:59:24.995322  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:59:24.995387  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:59:25.005782  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:59:25.015121  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:59:25.015216  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:59:25.024739  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:59:25.033898  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:59:25.033949  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:59:25.043527  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:59:25.052795  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:59:25.052860  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:59:25.063719  106017 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:59:25.172138  106017 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1211 23:59:25.172231  106017 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 23:59:25.282095  106017 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:59:25.282220  106017 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:59:25.282346  106017 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:59:25.292987  106017 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:59:25.507248  106017 out.go:235]   - Generating certificates and keys ...
	I1211 23:59:25.507374  106017 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 23:59:25.507500  106017 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 23:59:25.628233  106017 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:59:25.895094  106017 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:59:26.195266  106017 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:59:26.355531  106017 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1211 23:59:26.415298  106017 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1211 23:59:26.415433  106017 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-565823 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1211 23:59:26.603280  106017 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1211 23:59:26.603516  106017 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-565823 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1211 23:59:26.737544  106017 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:59:26.938736  106017 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:59:27.118447  106017 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1211 23:59:27.118579  106017 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:59:27.214058  106017 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:59:27.283360  106017 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:59:27.437118  106017 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:59:27.583693  106017 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:59:27.738001  106017 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:59:27.738673  106017 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:59:27.741933  106017 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:59:27.743702  106017 out.go:235]   - Booting up control plane ...
	I1211 23:59:27.743844  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:59:27.744424  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:59:27.746935  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:59:27.765392  106017 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:59:27.772566  106017 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:59:27.772699  106017 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 23:59:27.925671  106017 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:59:27.925813  106017 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:59:28.450340  106017 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 524.075614ms
	I1211 23:59:28.450451  106017 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1211 23:59:34.524805  106017 kubeadm.go:310] [api-check] The API server is healthy after 6.076898322s
	I1211 23:59:34.537381  106017 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:59:34.553285  106017 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:59:35.079814  106017 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:59:35.080057  106017 kubeadm.go:310] [mark-control-plane] Marking the node ha-565823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:59:35.095582  106017 kubeadm.go:310] [bootstrap-token] Using token: lktsit.hvyjnx8elfe20z7f
	I1211 23:59:35.097027  106017 out.go:235]   - Configuring RBAC rules ...
	I1211 23:59:35.097177  106017 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:59:35.101780  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:59:35.113593  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:59:35.118164  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:59:35.121511  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:59:35.125148  106017 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:59:35.144131  106017 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:59:35.407109  106017 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 23:59:35.930699  106017 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 23:59:35.931710  106017 kubeadm.go:310] 
	I1211 23:59:35.931771  106017 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 23:59:35.931775  106017 kubeadm.go:310] 
	I1211 23:59:35.931851  106017 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 23:59:35.931859  106017 kubeadm.go:310] 
	I1211 23:59:35.931880  106017 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 23:59:35.931927  106017 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:59:35.931982  106017 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:59:35.932000  106017 kubeadm.go:310] 
	I1211 23:59:35.932049  106017 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 23:59:35.932058  106017 kubeadm.go:310] 
	I1211 23:59:35.932118  106017 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:59:35.932126  106017 kubeadm.go:310] 
	I1211 23:59:35.932168  106017 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 23:59:35.932259  106017 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:59:35.932333  106017 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:59:35.932350  106017 kubeadm.go:310] 
	I1211 23:59:35.932432  106017 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:59:35.932499  106017 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 23:59:35.932506  106017 kubeadm.go:310] 
	I1211 23:59:35.932579  106017 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lktsit.hvyjnx8elfe20z7f \
	I1211 23:59:35.932666  106017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1211 23:59:35.932687  106017 kubeadm.go:310] 	--control-plane 
	I1211 23:59:35.932692  106017 kubeadm.go:310] 
	I1211 23:59:35.932780  106017 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:59:35.932793  106017 kubeadm.go:310] 
	I1211 23:59:35.932900  106017 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lktsit.hvyjnx8elfe20z7f \
	I1211 23:59:35.933031  106017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1211 23:59:35.933914  106017 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
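
	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo. The self-contained Go sketch below recomputes it from /var/lib/minikube/certs/ca.crt (the path the CA was copied to earlier in this log) so the printed value can be compared against the one kubeadm emitted.

	// ca_hash.go - sketch: recompute kubeadm's --discovery-token-ca-cert-hash
	// as sha256 over the CA certificate's DER-encoded SubjectPublicKeyInfo.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path from the log above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found in ca.crt")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		sum := sha256.Sum256(spki)
		fmt.Printf("sha256:%x\n", sum)
	}

	If the output matches the sha256:4154e7... value printed by kubeadm, the ca.crt on disk is the one the join token was issued against.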
	I1211 23:59:35.934034  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:59:35.934056  106017 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 23:59:35.936050  106017 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1211 23:59:35.937506  106017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1211 23:59:35.943577  106017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1211 23:59:35.943610  106017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1211 23:59:35.964609  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1211 23:59:36.354699  106017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:59:36.354799  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:36.354832  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823 minikube.k8s.io/updated_at=2024_12_11T23_59_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=true
	I1211 23:59:36.386725  106017 ops.go:34] apiserver oom_adj: -16
	I1211 23:59:36.511318  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:37.011972  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:37.511719  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:38.012059  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:38.511637  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:39.012451  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:39.512222  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.012218  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.512204  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.605442  106017 kubeadm.go:1113] duration metric: took 4.250718988s to wait for elevateKubeSystemPrivileges
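
	The repeated "kubectl get sa default" runs above are a poll loop: the probe is retried roughly every half second until the default service account exists, which is what the 4.25s elevateKubeSystemPrivileges metric measures. A generic sketch of that retry-with-deadline pattern follows; the plain kubectl invocation and the 2-minute deadline are illustrative, not minikube's exact values.

	// poll_sa.go - sketch: run a probe command until it succeeds or a
	// deadline passes, mirroring the service-account wait in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for {
			err := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account is present")
				return
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "timed out waiting for default service account:", err)
				os.Exit(1)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
		}
	}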
	I1211 23:59:40.605479  106017 kubeadm.go:394] duration metric: took 15.690206878s to StartCluster
	I1211 23:59:40.605505  106017 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:40.605593  106017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:59:40.606578  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:40.606860  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:59:40.606860  106017 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:59:40.606883  106017 start.go:241] waiting for startup goroutines ...
	I1211 23:59:40.606899  106017 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 23:59:40.606982  106017 addons.go:69] Setting storage-provisioner=true in profile "ha-565823"
	I1211 23:59:40.606989  106017 addons.go:69] Setting default-storageclass=true in profile "ha-565823"
	I1211 23:59:40.607004  106017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565823"
	I1211 23:59:40.607018  106017 addons.go:234] Setting addon storage-provisioner=true in "ha-565823"
	I1211 23:59:40.607045  106017 host.go:66] Checking if "ha-565823" exists ...
	I1211 23:59:40.607426  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.607469  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.607635  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:40.607793  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.607838  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.622728  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I1211 23:59:40.622807  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1211 23:59:40.623266  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.623370  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.623966  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.623993  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.624004  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.624015  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.624390  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.624398  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.624567  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.624920  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.624961  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.626695  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:59:40.627009  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 23:59:40.627499  106017 cert_rotation.go:140] Starting client certificate rotation controller
	I1211 23:59:40.627813  106017 addons.go:234] Setting addon default-storageclass=true in "ha-565823"
	I1211 23:59:40.627859  106017 host.go:66] Checking if "ha-565823" exists ...
	I1211 23:59:40.628133  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.628177  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.640869  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I1211 23:59:40.641437  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.642016  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.642043  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.642434  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.642635  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.643106  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I1211 23:59:40.643674  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.644240  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.644275  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.644588  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.644640  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:40.645087  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.645136  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.646489  106017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:59:40.647996  106017 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:59:40.648015  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:59:40.648030  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:40.651165  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.651679  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:40.651703  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.651939  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:40.652136  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:40.652353  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:40.652515  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:40.661089  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
	I1211 23:59:40.661521  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.661949  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.661970  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.662302  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.662464  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.664023  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:40.664204  106017 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:59:40.664219  106017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:59:40.664234  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:40.666799  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.667194  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:40.667218  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.667366  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:40.667518  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:40.667676  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:40.667787  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:40.766556  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:59:40.838934  106017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:59:40.853931  106017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:59:41.384410  106017 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
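
	The bash pipeline at 23:59:40.766556 edits the CoreDNS Corefile in place: a hosts block mapping host.minikube.internal to 192.168.39.1 is inserted before the forward plugin (and a log directive before errors) before the ConfigMap is replaced. The Go sketch below performs the same hosts-block insertion on a Corefile string; the trimmed Corefile used here is illustrative, not the real ConfigMap contents.

	// corefile_inject.go - sketch: insert a hosts{} block before the
	// "forward . /etc/resolv.conf" line of a CoreDNS Corefile.
	package main

	import (
		"fmt"
		"strings"
	)

	func injectHostRecord(corefile, hostIP string) string {
		hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(hostsBlock) // insert just before the forward plugin
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
	}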
	I1211 23:59:41.687789  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.687839  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688024  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688044  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688143  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688158  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.688166  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688175  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688183  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688295  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688309  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688316  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688337  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688398  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688424  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688407  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.688511  106017 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 23:59:41.688531  106017 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 23:59:41.688635  106017 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1211 23:59:41.688642  106017 round_trippers.go:469] Request Headers:
	I1211 23:59:41.688654  106017 round_trippers.go:473]     Accept: application/json, */*
	I1211 23:59:41.688660  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1211 23:59:41.689067  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.689084  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.689112  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.703120  106017 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1211 23:59:41.703858  106017 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1211 23:59:41.703876  106017 round_trippers.go:469] Request Headers:
	I1211 23:59:41.703888  106017 round_trippers.go:473]     Content-Type: application/json
	I1211 23:59:41.703896  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1211 23:59:41.703902  106017 round_trippers.go:473]     Accept: application/json, */*
	I1211 23:59:41.707451  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1211 23:59:41.707880  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.707905  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.708200  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.708289  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.708309  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.710098  106017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1211 23:59:41.711624  106017 addons.go:510] duration metric: took 1.104728302s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1211 23:59:41.711657  106017 start.go:246] waiting for cluster config update ...
	I1211 23:59:41.711669  106017 start.go:255] writing updated cluster config ...
	I1211 23:59:41.713334  106017 out.go:201] 
	I1211 23:59:41.714788  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:41.714856  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:41.716555  106017 out.go:177] * Starting "ha-565823-m02" control-plane node in "ha-565823" cluster
	I1211 23:59:41.717794  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:59:41.717815  106017 cache.go:56] Caching tarball of preloaded images
	I1211 23:59:41.717923  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:59:41.717935  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:59:41.717999  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:41.718156  106017 start.go:360] acquireMachinesLock for ha-565823-m02: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:59:41.718199  106017 start.go:364] duration metric: took 25.794µs to acquireMachinesLock for "ha-565823-m02"
	I1211 23:59:41.718224  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:59:41.718291  106017 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1211 23:59:41.719692  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 23:59:41.719777  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:41.719812  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:41.734465  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1211 23:59:41.734950  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:41.735455  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:41.735478  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:41.735843  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:41.736006  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1211 23:59:41.736149  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1211 23:59:41.736349  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1211 23:59:41.736395  106017 client.go:168] LocalClient.Create starting
	I1211 23:59:41.736425  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:59:41.736455  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:59:41.736469  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:59:41.736519  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:59:41.736537  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:59:41.736547  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:59:41.736559  106017 main.go:141] libmachine: Running pre-create checks...
	I1211 23:59:41.736567  106017 main.go:141] libmachine: (ha-565823-m02) Calling .PreCreateCheck
	I1211 23:59:41.736735  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1211 23:59:41.737076  106017 main.go:141] libmachine: Creating machine...
	I1211 23:59:41.737091  106017 main.go:141] libmachine: (ha-565823-m02) Calling .Create
	I1211 23:59:41.737203  106017 main.go:141] libmachine: (ha-565823-m02) Creating KVM machine...
	I1211 23:59:41.738412  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found existing default KVM network
	I1211 23:59:41.738502  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found existing private KVM network mk-ha-565823
	I1211 23:59:41.738691  106017 main.go:141] libmachine: (ha-565823-m02) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 ...
	I1211 23:59:41.738735  106017 main.go:141] libmachine: (ha-565823-m02) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:59:41.738778  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:41.738685  106399 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:59:41.738888  106017 main.go:141] libmachine: (ha-565823-m02) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:59:42.010827  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.010671  106399 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa...
	I1211 23:59:42.081269  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.081125  106399 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk...
	I1211 23:59:42.081297  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Writing magic tar header
	I1211 23:59:42.081315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Writing SSH key tar header
	I1211 23:59:42.081327  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.081241  106399 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 ...
	I1211 23:59:42.081337  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02
	I1211 23:59:42.081349  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:59:42.081395  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 (perms=drwx------)
	I1211 23:59:42.081428  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:59:42.081445  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:59:42.081465  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:59:42.081477  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:59:42.081489  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:59:42.081497  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home
	I1211 23:59:42.081510  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:59:42.081524  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:59:42.081536  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Skipping /home - not owner
	I1211 23:59:42.081553  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:59:42.081564  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:59:42.081577  106017 main.go:141] libmachine: (ha-565823-m02) Creating domain...
	I1211 23:59:42.082570  106017 main.go:141] libmachine: (ha-565823-m02) define libvirt domain using xml: 
	I1211 23:59:42.082593  106017 main.go:141] libmachine: (ha-565823-m02) <domain type='kvm'>
	I1211 23:59:42.082600  106017 main.go:141] libmachine: (ha-565823-m02)   <name>ha-565823-m02</name>
	I1211 23:59:42.082605  106017 main.go:141] libmachine: (ha-565823-m02)   <memory unit='MiB'>2200</memory>
	I1211 23:59:42.082610  106017 main.go:141] libmachine: (ha-565823-m02)   <vcpu>2</vcpu>
	I1211 23:59:42.082618  106017 main.go:141] libmachine: (ha-565823-m02)   <features>
	I1211 23:59:42.082626  106017 main.go:141] libmachine: (ha-565823-m02)     <acpi/>
	I1211 23:59:42.082641  106017 main.go:141] libmachine: (ha-565823-m02)     <apic/>
	I1211 23:59:42.082671  106017 main.go:141] libmachine: (ha-565823-m02)     <pae/>
	I1211 23:59:42.082693  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.082705  106017 main.go:141] libmachine: (ha-565823-m02)   </features>
	I1211 23:59:42.082719  106017 main.go:141] libmachine: (ha-565823-m02)   <cpu mode='host-passthrough'>
	I1211 23:59:42.082728  106017 main.go:141] libmachine: (ha-565823-m02)   
	I1211 23:59:42.082736  106017 main.go:141] libmachine: (ha-565823-m02)   </cpu>
	I1211 23:59:42.082744  106017 main.go:141] libmachine: (ha-565823-m02)   <os>
	I1211 23:59:42.082754  106017 main.go:141] libmachine: (ha-565823-m02)     <type>hvm</type>
	I1211 23:59:42.082761  106017 main.go:141] libmachine: (ha-565823-m02)     <boot dev='cdrom'/>
	I1211 23:59:42.082771  106017 main.go:141] libmachine: (ha-565823-m02)     <boot dev='hd'/>
	I1211 23:59:42.082779  106017 main.go:141] libmachine: (ha-565823-m02)     <bootmenu enable='no'/>
	I1211 23:59:42.082792  106017 main.go:141] libmachine: (ha-565823-m02)   </os>
	I1211 23:59:42.082803  106017 main.go:141] libmachine: (ha-565823-m02)   <devices>
	I1211 23:59:42.082811  106017 main.go:141] libmachine: (ha-565823-m02)     <disk type='file' device='cdrom'>
	I1211 23:59:42.082828  106017 main.go:141] libmachine: (ha-565823-m02)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/boot2docker.iso'/>
	I1211 23:59:42.082836  106017 main.go:141] libmachine: (ha-565823-m02)       <target dev='hdc' bus='scsi'/>
	I1211 23:59:42.082847  106017 main.go:141] libmachine: (ha-565823-m02)       <readonly/>
	I1211 23:59:42.082857  106017 main.go:141] libmachine: (ha-565823-m02)     </disk>
	I1211 23:59:42.082887  106017 main.go:141] libmachine: (ha-565823-m02)     <disk type='file' device='disk'>
	I1211 23:59:42.082908  106017 main.go:141] libmachine: (ha-565823-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:59:42.082928  106017 main.go:141] libmachine: (ha-565823-m02)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk'/>
	I1211 23:59:42.082944  106017 main.go:141] libmachine: (ha-565823-m02)       <target dev='hda' bus='virtio'/>
	I1211 23:59:42.082957  106017 main.go:141] libmachine: (ha-565823-m02)     </disk>
	I1211 23:59:42.082968  106017 main.go:141] libmachine: (ha-565823-m02)     <interface type='network'>
	I1211 23:59:42.082978  106017 main.go:141] libmachine: (ha-565823-m02)       <source network='mk-ha-565823'/>
	I1211 23:59:42.082985  106017 main.go:141] libmachine: (ha-565823-m02)       <model type='virtio'/>
	I1211 23:59:42.082990  106017 main.go:141] libmachine: (ha-565823-m02)     </interface>
	I1211 23:59:42.082997  106017 main.go:141] libmachine: (ha-565823-m02)     <interface type='network'>
	I1211 23:59:42.083003  106017 main.go:141] libmachine: (ha-565823-m02)       <source network='default'/>
	I1211 23:59:42.083012  106017 main.go:141] libmachine: (ha-565823-m02)       <model type='virtio'/>
	I1211 23:59:42.083025  106017 main.go:141] libmachine: (ha-565823-m02)     </interface>
	I1211 23:59:42.083038  106017 main.go:141] libmachine: (ha-565823-m02)     <serial type='pty'>
	I1211 23:59:42.083047  106017 main.go:141] libmachine: (ha-565823-m02)       <target port='0'/>
	I1211 23:59:42.083054  106017 main.go:141] libmachine: (ha-565823-m02)     </serial>
	I1211 23:59:42.083065  106017 main.go:141] libmachine: (ha-565823-m02)     <console type='pty'>
	I1211 23:59:42.083077  106017 main.go:141] libmachine: (ha-565823-m02)       <target type='serial' port='0'/>
	I1211 23:59:42.083089  106017 main.go:141] libmachine: (ha-565823-m02)     </console>
	I1211 23:59:42.083098  106017 main.go:141] libmachine: (ha-565823-m02)     <rng model='virtio'>
	I1211 23:59:42.083112  106017 main.go:141] libmachine: (ha-565823-m02)       <backend model='random'>/dev/random</backend>
	I1211 23:59:42.083126  106017 main.go:141] libmachine: (ha-565823-m02)     </rng>
	I1211 23:59:42.083154  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.083172  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.083184  106017 main.go:141] libmachine: (ha-565823-m02)   </devices>
	I1211 23:59:42.083193  106017 main.go:141] libmachine: (ha-565823-m02) </domain>
	I1211 23:59:42.083206  106017 main.go:141] libmachine: (ha-565823-m02) 
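The driver prints the full libvirt domain XML above before defining it. As an illustration only (not the driver's actual code path), here is a minimal Go sketch that defines and boots such a domain from a pre-rendered XML file by shelling out to virsh; the file path and domain name are placeholders.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// defineAndStart registers a libvirt domain from a pre-rendered XML file and
// boots it, mirroring the "define libvirt domain using xml" / "Creating domain"
// steps logged above. Paths and names are illustrative.
func defineAndStart(xmlPath, domainName string) error {
	// `virsh define` makes the domain persistent from the XML description.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	// `virsh start` actually boots the defined domain.
	if out, err := exec.Command("virsh", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical file path; the real driver talks to libvirt through its API.
	if err := defineAndStart("/tmp/ha-565823-m02.xml", "ha-565823-m02"); err != nil {
		log.Fatal(err)
	}
}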
	I1211 23:59:42.090031  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:4e:60:e6 in network default
	I1211 23:59:42.090722  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring networks are active...
	I1211 23:59:42.090744  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:42.091386  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring network default is active
	I1211 23:59:42.091728  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring network mk-ha-565823 is active
	I1211 23:59:42.092172  106017 main.go:141] libmachine: (ha-565823-m02) Getting domain xml...
	I1211 23:59:42.092821  106017 main.go:141] libmachine: (ha-565823-m02) Creating domain...
	I1211 23:59:43.306722  106017 main.go:141] libmachine: (ha-565823-m02) Waiting to get IP...
	I1211 23:59:43.307541  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.307970  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.308021  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.307943  106399 retry.go:31] will retry after 188.292611ms: waiting for machine to come up
	I1211 23:59:43.498538  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.498980  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.499007  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.498936  106399 retry.go:31] will retry after 383.283577ms: waiting for machine to come up
	I1211 23:59:43.883676  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.884158  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.884186  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.884123  106399 retry.go:31] will retry after 368.673726ms: waiting for machine to come up
	I1211 23:59:44.254720  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:44.255182  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:44.255205  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:44.255142  106399 retry.go:31] will retry after 403.445822ms: waiting for machine to come up
	I1211 23:59:44.660664  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:44.661153  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:44.661178  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:44.661074  106399 retry.go:31] will retry after 718.942978ms: waiting for machine to come up
	I1211 23:59:45.382183  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:45.382736  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:45.382761  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:45.382694  106399 retry.go:31] will retry after 941.806671ms: waiting for machine to come up
	I1211 23:59:46.326070  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:46.326533  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:46.326566  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:46.326481  106399 retry.go:31] will retry after 1.01864437s: waiting for machine to come up
	I1211 23:59:47.347315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:47.347790  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:47.347812  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:47.347737  106399 retry.go:31] will retry after 1.213138s: waiting for machine to come up
	I1211 23:59:48.562238  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:48.562705  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:48.562737  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:48.562658  106399 retry.go:31] will retry after 1.846591325s: waiting for machine to come up
	I1211 23:59:50.410650  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:50.411116  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:50.411143  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:50.411072  106399 retry.go:31] will retry after 2.02434837s: waiting for machine to come up
	I1211 23:59:52.436763  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:52.437247  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:52.437276  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:52.437194  106399 retry.go:31] will retry after 1.785823174s: waiting for machine to come up
	I1211 23:59:54.224640  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:54.224948  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:54.224975  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:54.224901  106399 retry.go:31] will retry after 2.203569579s: waiting for machine to come up
	I1211 23:59:56.431378  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:56.431904  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:56.431933  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:56.431858  106399 retry.go:31] will retry after 3.94903919s: waiting for machine to come up
	I1212 00:00:00.384703  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:00.385175  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1212 00:00:00.385208  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1212 00:00:00.385121  106399 retry.go:31] will retry after 3.809627495s: waiting for machine to come up
	I1212 00:00:04.197607  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.198181  106017 main.go:141] libmachine: (ha-565823-m02) Found IP for machine: 192.168.39.103
	I1212 00:00:04.198204  106017 main.go:141] libmachine: (ha-565823-m02) Reserving static IP address...
	I1212 00:00:04.198220  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has current primary IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.198616  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find host DHCP lease matching {name: "ha-565823-m02", mac: "52:54:00:cc:31:80", ip: "192.168.39.103"} in network mk-ha-565823
	I1212 00:00:04.273114  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Getting to WaitForSSH function...
	I1212 00:00:04.273143  106017 main.go:141] libmachine: (ha-565823-m02) Reserved static IP address: 192.168.39.103
	I1212 00:00:04.273155  106017 main.go:141] libmachine: (ha-565823-m02) Waiting for SSH to be available...
	I1212 00:00:04.275998  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.276409  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.276438  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.276561  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using SSH client type: external
	I1212 00:00:04.276592  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa (-rw-------)
	I1212 00:00:04.276623  106017 main.go:141] libmachine: (ha-565823-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:00:04.276639  106017 main.go:141] libmachine: (ha-565823-m02) DBG | About to run SSH command:
	I1212 00:00:04.276655  106017 main.go:141] libmachine: (ha-565823-m02) DBG | exit 0
	I1212 00:00:04.400102  106017 main.go:141] libmachine: (ha-565823-m02) DBG | SSH cmd err, output: <nil>: 
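The creation sequence above polls for a DHCP lease with growing delays and then repeatedly runs a no-op `exit 0` over SSH until the guest answers. A rough Go sketch of that wait loop, assuming the same ssh options seen in the log and an illustrative host/key path:

package main

import (
	"log"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op SSH command ("exit 0") with a growing delay,
// roughly like the "will retry after ...: waiting for machine to come up"
// loop above. host and keyPath are placeholders.
func waitForSSH(host, keyPath string, attempts int) error {
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+host, "exit 0")
		if lastErr = cmd.Run(); lastErr == nil {
			return nil // guest answered; SSH is available
		}
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff; the real loop also adds jitter
	}
	return lastErr
}

func main() {
	if err := waitForSSH("192.168.39.103", "/path/to/id_rsa", 10); err != nil {
		log.Fatalf("machine never became reachable: %v", err)
	}
}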
	I1212 00:00:04.400348  106017 main.go:141] libmachine: (ha-565823-m02) KVM machine creation complete!
	I1212 00:00:04.400912  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1212 00:00:04.401484  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:04.401664  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:04.401821  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:00:04.401837  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetState
	I1212 00:00:04.403174  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:00:04.403192  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:00:04.403199  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:00:04.403208  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.405388  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.405786  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.405820  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.405928  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.406109  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.406313  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.406472  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.406636  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.406846  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.406860  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:00:04.507379  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:00:04.507409  106017 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:00:04.507426  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.510219  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.510595  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.510633  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.510776  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.511014  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.511172  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.511323  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.511507  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.511752  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.511765  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:00:04.612413  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:00:04.612516  106017 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:00:04.612530  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:00:04.612538  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.612840  106017 buildroot.go:166] provisioning hostname "ha-565823-m02"
	I1212 00:00:04.612874  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.613079  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.615872  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.616272  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.616326  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.616447  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.616621  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.616780  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.616976  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.617134  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.617294  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.617306  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823-m02 && echo "ha-565823-m02" | sudo tee /etc/hostname
	I1212 00:00:04.736911  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823-m02
	
	I1212 00:00:04.736949  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.739899  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.740287  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.740321  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.740530  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.740723  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.740885  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.741022  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.741259  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.741462  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.741481  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:00:04.854133  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
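The hostname step is a shell pipeline sent over SSH: set the hostname, persist it, and make sure /etc/hosts carries a matching 127.0.1.1 entry. A small Go helper that assembles an equivalent command string (a simplified sketch; the SSH runner that would execute it is assumed, not shown):

package main

import "fmt"

// hostnameCmd builds a provisioning command like the one logged above: set
// the hostname, write /etc/hostname, then insert or rewrite the 127.0.1.1
// line in /etc/hosts so the name resolves locally. The /etc/hosts guard is
// simplified compared to the grep -xq logic in the log.
func hostnameCmd(name string) string {
	return fmt.Sprintf(
		`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname && `+
			`(grep -q '127.0.1.1' /etc/hosts && `+
			`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/' /etc/hosts || `+
			`echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts)`, name)
}

func main() {
	fmt.Println(hostnameCmd("ha-565823-m02"))
}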
	I1212 00:00:04.854171  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:00:04.854189  106017 buildroot.go:174] setting up certificates
	I1212 00:00:04.854199  106017 provision.go:84] configureAuth start
	I1212 00:00:04.854213  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.854617  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:04.858031  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.858466  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.858492  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.858772  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.860980  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.861315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.861344  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.861482  106017 provision.go:143] copyHostCerts
	I1212 00:00:04.861512  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:00:04.861546  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:00:04.861556  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:00:04.861621  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:00:04.861699  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:00:04.861718  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:00:04.861725  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:00:04.861748  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:00:04.861792  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:00:04.861809  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:00:04.861815  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:00:04.861836  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:00:04.861892  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823-m02 san=[127.0.0.1 192.168.39.103 ha-565823-m02 localhost minikube]
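configureAuth signs a server certificate whose SANs cover the node's IPs and hostnames (the san=[...] list above). As a generic illustration of that technique, not minikube's certs package, a self-contained Go sketch using crypto/x509 with a throwaway CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for illustration; the real flow loads ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // key-gen errors elided for brevity
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Server cert carrying the same SANs the log lists for ha-565823-m02.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-565823-m02"}},
		DNSNames:     []string{"ha-565823-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.103")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}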
	I1212 00:00:05.017387  106017 provision.go:177] copyRemoteCerts
	I1212 00:00:05.017447  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:00:05.017475  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.020320  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.020751  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.020781  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.020994  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.021285  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.021461  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.021631  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.103134  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:00:05.103225  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:00:05.128318  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:00:05.128392  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:00:05.152814  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:00:05.152893  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:00:05.177479  106017 provision.go:87] duration metric: took 323.264224ms to configureAuth
	I1212 00:00:05.177509  106017 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:00:05.177674  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:05.177748  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.180791  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.181249  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.181280  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.181463  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.181702  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.181870  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.182010  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.182176  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:05.182341  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:05.182357  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:00:05.417043  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:00:05.417067  106017 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:00:05.417075  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetURL
	I1212 00:00:05.418334  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using libvirt version 6000000
	I1212 00:00:05.420596  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.420905  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.420938  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.421114  106017 main.go:141] libmachine: Docker is up and running!
	I1212 00:00:05.421129  106017 main.go:141] libmachine: Reticulating splines...
	I1212 00:00:05.421139  106017 client.go:171] duration metric: took 23.684732891s to LocalClient.Create
	I1212 00:00:05.421170  106017 start.go:167] duration metric: took 23.684823561s to libmachine.API.Create "ha-565823"
	I1212 00:00:05.421183  106017 start.go:293] postStartSetup for "ha-565823-m02" (driver="kvm2")
	I1212 00:00:05.421197  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:00:05.421214  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.421468  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:00:05.421495  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.424694  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.425050  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.425083  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.425238  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.425449  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.425599  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.425739  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.506562  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:00:05.511891  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:00:05.511921  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:00:05.512000  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:00:05.512114  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:00:05.512128  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:00:05.512236  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:00:05.525426  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:00:05.552318  106017 start.go:296] duration metric: took 131.1154ms for postStartSetup
	I1212 00:00:05.552386  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1212 00:00:05.553038  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:05.556173  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.556661  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.556704  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.556972  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:05.557179  106017 start.go:128] duration metric: took 23.838875142s to createHost
	I1212 00:00:05.557206  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.559644  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.560000  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.560021  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.560242  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.560469  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.560659  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.560833  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.561033  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:05.561234  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:05.561248  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:00:05.664479  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961605.636878321
	
	I1212 00:00:05.664504  106017 fix.go:216] guest clock: 1733961605.636878321
	I1212 00:00:05.664511  106017 fix.go:229] Guest: 2024-12-12 00:00:05.636878321 +0000 UTC Remote: 2024-12-12 00:00:05.557193497 +0000 UTC m=+75.719020541 (delta=79.684824ms)
	I1212 00:00:05.664529  106017 fix.go:200] guest clock delta is within tolerance: 79.684824ms
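The guest-clock check captures `date +%s.%N` on the VM and compares it with the host's wall clock, accepting a small drift. A stand-alone sketch of that comparison, parsing the same seconds.nanoseconds format (the tolerance value below is illustrative):

package main

import (
	"fmt"
	"log"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far it
// drifts from the local clock. Float parsing loses a few nanoseconds of
// precision, which is fine for a tolerance check.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	// Sample input taken from the guest clock value captured in the log above.
	d, err := clockDelta("1733961605.636878321")
	if err != nil {
		log.Fatal(err)
	}
	ok := d < 2*time.Second && d > -2*time.Second
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", d, ok)
}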
	I1212 00:00:05.664536  106017 start.go:83] releasing machines lock for "ha-565823-m02", held for 23.946326821s
	I1212 00:00:05.664559  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.664834  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:05.667309  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.667587  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.667625  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.670169  106017 out.go:177] * Found network options:
	I1212 00:00:05.671775  106017 out.go:177]   - NO_PROXY=192.168.39.19
	W1212 00:00:05.673420  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:00:05.673451  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.673974  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.674184  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.674310  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:00:05.674362  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	W1212 00:00:05.674404  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:00:05.674488  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:00:05.674510  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.677209  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677558  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.677588  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677632  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677782  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.677967  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.678067  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.678094  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.678133  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.678286  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.678288  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.678440  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.678560  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.678668  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.906824  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:00:05.913945  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:00:05.914026  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:00:05.931775  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:00:05.931797  106017 start.go:495] detecting cgroup driver to use...
	I1212 00:00:05.931857  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:00:05.948556  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:00:05.963326  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:00:05.963397  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:00:05.978208  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:00:05.992483  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:00:06.103988  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:00:06.275509  106017 docker.go:233] disabling docker service ...
	I1212 00:00:06.275580  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:00:06.293042  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:00:06.306048  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:00:06.431702  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:00:06.557913  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:00:06.573066  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:00:06.592463  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:00:06.592536  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.604024  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:00:06.604087  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.615267  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.626194  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.637083  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:00:06.648061  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.659477  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.677134  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
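The CRI-O preparation above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf, most importantly pointing pause_image at registry.k8s.io/pause:3.10 and forcing the cgroupfs cgroup manager. The same two substitutions done in memory with Go's regexp package (a sketch; the real flow runs sed on the guest over SSH):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the two key substitutions from the log to a CRI-O
// drop-in config held in memory: pause image and cgroup manager.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	sample := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(sample))
}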
	I1212 00:00:06.687875  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:00:06.701376  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:00:06.701451  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:00:06.714621  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
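When the bridge-netfilter sysctl is missing (the status 255 above), the driver loads br_netfilter and then enables IPv4 forwarding. A minimal Go sketch of that check-then-fallback, shelling out to the same commands (assumes it runs as root on the guest):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the fallback in the log: if the bridge netfilter
// sysctl cannot be read, load the br_netfilter module (which creates it),
// then turn on IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		log.Fatal(err)
	}
}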
	I1212 00:00:06.724651  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:06.844738  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:00:06.941123  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:00:06.941186  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:00:06.946025  106017 start.go:563] Will wait 60s for crictl version
	I1212 00:00:06.946103  106017 ssh_runner.go:195] Run: which crictl
	I1212 00:00:06.950454  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:00:06.989220  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:00:06.989302  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:00:07.018407  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:00:07.049375  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:00:07.051430  106017 out.go:177]   - env NO_PROXY=192.168.39.19
	I1212 00:00:07.052588  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:07.055087  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:07.055359  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:07.055377  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:07.055577  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:00:07.059718  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:00:07.072121  106017 mustload.go:65] Loading cluster: ha-565823
	I1212 00:00:07.072328  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:07.072649  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:07.072692  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:07.087345  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I1212 00:00:07.087790  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:07.088265  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:07.088285  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:07.088623  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:07.088818  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:00:07.090394  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:00:07.090786  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:07.090832  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:07.107441  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I1212 00:00:07.107836  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:07.108308  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:07.108327  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:07.108632  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:07.108786  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:07.108915  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.103
	I1212 00:00:07.108926  106017 certs.go:194] generating shared ca certs ...
	I1212 00:00:07.108939  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.109062  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:00:07.109105  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:00:07.109114  106017 certs.go:256] generating profile certs ...
	I1212 00:00:07.109178  106017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:00:07.109202  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc
	I1212 00:00:07.109217  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.254]
	I1212 00:00:07.203114  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc ...
	I1212 00:00:07.203150  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc: {Name:mk3a75c055b0a829a056d90903c78ae5decf9bac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.203349  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc ...
	I1212 00:00:07.203372  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc: {Name:mkce850d5486843203391b76609d5fd65c614c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.203468  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:00:07.203647  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1212 00:00:07.203815  106017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:00:07.203836  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:00:07.203855  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:00:07.203870  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:00:07.203891  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:00:07.203909  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:00:07.203931  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:00:07.203949  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:00:07.203968  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:00:07.204035  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:00:07.204078  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:00:07.204113  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:00:07.204170  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:00:07.204217  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:00:07.204255  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:00:07.204310  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:00:07.204351  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.204383  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.204402  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.204445  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:00:07.207043  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:07.207413  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:00:07.207439  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:07.207647  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:00:07.207863  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:00:07.208027  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:00:07.208177  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:00:07.288012  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 00:00:07.293204  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 00:00:07.304789  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 00:00:07.310453  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 00:00:07.321124  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 00:00:07.326057  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 00:00:07.337737  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 00:00:07.342691  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1212 00:00:07.354806  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 00:00:07.359143  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 00:00:07.371799  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 00:00:07.376295  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 00:00:07.387705  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:00:07.415288  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:00:07.440414  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:00:07.466177  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:00:07.490907  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 00:00:07.517228  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:00:07.542858  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:00:07.567465  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:00:07.592181  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:00:07.616218  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:00:07.641063  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:00:07.665682  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 00:00:07.683443  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 00:00:07.700820  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 00:00:07.718283  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1212 00:00:07.735173  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 00:00:07.752079  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 00:00:07.770479  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 00:00:07.789102  106017 ssh_runner.go:195] Run: openssl version
	I1212 00:00:07.795248  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:00:07.806811  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.811750  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.811816  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.818034  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:00:07.829409  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:00:07.840952  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.845782  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.845853  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.851849  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:00:07.863158  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:00:07.875091  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.880111  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.880173  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.886325  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
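
Each certificate installed above is made trusted by hashing it with openssl and symlinking it into /etc/ssl/certs as <subject-hash>.0, which is how OpenSSL locates CAs. A sketch of that step for one certificate, assuming openssl is on PATH and the process runs as root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mirrors `openssl x509 -hash` plus `ln -fs <cert> /etc/ssl/certs/<hash>.0`.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        // minikubeCA.pem is one of the three certificates handled above.
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
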
	I1212 00:00:07.897750  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:00:07.902056  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:00:07.902131  106017 kubeadm.go:934] updating node {m02 192.168.39.103 8443 v1.31.2 crio true true} ...
	I1212 00:00:07.902244  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:00:07.902279  106017 kube-vip.go:115] generating kube-vip config ...
	I1212 00:00:07.902323  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:00:07.920010  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:00:07.920099  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1212 00:00:07.920166  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:00:07.930159  106017 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1212 00:00:07.930221  106017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1212 00:00:07.939751  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1212 00:00:07.939776  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:00:07.939831  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:00:07.939835  106017 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1212 00:00:07.939861  106017 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1212 00:00:07.944054  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1212 00:00:07.944086  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1212 00:00:09.149265  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:00:09.168056  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:00:09.168181  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:00:09.173566  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1212 00:00:09.173601  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1212 00:00:09.219150  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:00:09.219238  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:00:09.234545  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1212 00:00:09.234589  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
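
The three transfers above (kubectl, kubelet, kubeadm) all follow the same check-then-copy pattern: stat the binary on the node and only copy it from the local cache when the stat fails. A local-filesystem sketch of that pattern (the real transfer goes over SSH, and the cache path under $HOME is an assumed layout standing in for the jenkins workspace path in the log):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // ensureBinary copies a cached binary to target only when target is missing.
    func ensureBinary(cached, target string) error {
        if _, err := os.Stat(target); err == nil {
            return nil // already present, nothing to transfer
        }
        src, err := os.Open(cached)
        if err != nil {
            return err
        }
        defer src.Close()
        dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
        if err != nil {
            return err
        }
        defer dst.Close()
        _, err = io.Copy(dst, src)
        return err
    }

    func main() {
        err := ensureBinary(
            os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.2/kubeadm"),
            "/var/lib/minikube/binaries/v1.31.2/kubeadm",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
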
	I1212 00:00:09.726465  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 00:00:09.736811  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1212 00:00:09.753799  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:00:09.771455  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:00:09.789916  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:00:09.794008  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:00:09.807290  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:09.944370  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:00:09.973225  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:00:09.973893  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:09.973959  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:09.989196  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I1212 00:00:09.989723  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:09.990363  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:09.990386  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:09.990735  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:09.990931  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:09.991104  106017 start.go:317] joinCluster: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1212 00:00:09.991225  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:00:09.991249  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:00:09.994437  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:09.995018  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:00:09.995065  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:09.995202  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:00:09.995448  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:00:09.995585  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:00:09.995765  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:00:10.156968  106017 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:10.157029  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token huaiy2.jqx4ang4teqw9q83 --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443"
	I1212 00:00:31.347275  106017 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token huaiy2.jqx4ang4teqw9q83 --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443": (21.190211224s)
	I1212 00:00:31.347321  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:00:31.826934  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823-m02 minikube.k8s.io/updated_at=2024_12_12T00_00_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=false
	I1212 00:00:32.001431  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565823-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1212 00:00:32.141631  106017 start.go:319] duration metric: took 22.150523355s to joinCluster
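
The join sequence above is: ask the existing control plane for a fresh join command ("kubeadm token create --print-join-command --ttl=0"), run it on the new machine with the extra control-plane flags shown in the log, then label the node and remove the control-plane NoSchedule taint. A condensed local sketch of the first two steps, assuming kubeadm on PATH and root (minikube runs both over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1 (on an existing control plane): print a fresh join command.
        out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "token create:", err)
            os.Exit(1)
        }
        // Step 2 (on the joining machine): run it with the control-plane flags
        // from the log; IPs and the node name are this cluster's values.
        join := strings.TrimSpace(string(out)) +
            " --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443" +
            " --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02"
        cmd := exec.Command("bash", "-c", join)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "kubeadm join:", err)
            os.Exit(1)
        }
    }
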
	I1212 00:00:32.141725  106017 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:32.141997  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:32.143552  106017 out.go:177] * Verifying Kubernetes components...
	I1212 00:00:32.145227  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:32.332043  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:00:32.348508  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:00:32.348864  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 00:00:32.348951  106017 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I1212 00:00:32.349295  106017 node_ready.go:35] waiting up to 6m0s for node "ha-565823-m02" to be "Ready" ...
	I1212 00:00:32.349423  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:32.349436  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:32.349449  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:32.349460  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:32.362203  106017 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 00:00:32.850412  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:32.850436  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:32.850447  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:32.850455  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:32.854786  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:33.349683  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:33.349707  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:33.349714  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:33.349718  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:33.354356  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:33.849742  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:33.849766  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:33.849774  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:33.849778  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:33.854313  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:34.350516  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:34.350539  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:34.350547  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:34.350551  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:34.355023  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:34.355775  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:34.850173  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:34.850197  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:34.850206  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:34.850210  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:34.853276  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:35.350529  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:35.350560  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:35.350568  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:35.350574  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:35.354219  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:35.850352  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:35.850378  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:35.850386  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:35.850391  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:35.853507  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:36.349531  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:36.349555  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:36.349566  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:36.349572  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:36.353110  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:36.849604  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:36.849629  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:36.849640  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:36.849645  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:36.856046  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:00:36.856697  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:37.349961  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:37.349980  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:37.349989  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:37.349993  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:37.354377  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:37.849622  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:37.849647  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:37.849660  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:37.849665  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:37.853494  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:38.349611  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:38.349641  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:38.349654  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:38.349686  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:38.354211  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:38.850399  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:38.850424  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:38.850434  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:38.850440  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:38.854312  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:39.350249  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:39.350275  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:39.350288  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:39.350293  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:39.354293  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:39.355152  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:39.849553  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:39.849578  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:39.849587  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:39.849592  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:39.854321  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:40.350406  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:40.350438  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:40.350450  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:40.350456  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:40.354039  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:40.850576  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:40.850604  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:40.850615  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:40.850620  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:40.854393  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.349882  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:41.349908  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:41.349919  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:41.349925  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:41.353612  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.849701  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:41.849723  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:41.849732  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:41.849737  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:41.852781  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.853447  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:42.349592  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:42.349615  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:42.349624  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:42.349629  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:42.352747  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:42.849858  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:42.849881  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:42.849889  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:42.849894  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:42.853198  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.350237  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:43.350265  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:43.350274  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:43.350278  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:43.353850  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.850187  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:43.850215  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:43.850227  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:43.850232  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:43.853783  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.854292  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:44.349681  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:44.349707  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:44.349714  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:44.349719  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:44.353562  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:44.849731  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:44.849764  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:44.849775  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:44.849783  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:44.853689  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:45.349741  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:45.349768  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:45.349777  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:45.349781  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:45.353601  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:45.849492  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:45.849515  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:45.849524  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:45.849528  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:45.853061  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:46.349543  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:46.349573  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:46.349584  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:46.349589  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:46.352599  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:46.353168  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:46.850149  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:46.850169  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:46.850177  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:46.850182  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:46.854205  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:47.350169  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:47.350191  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:47.350200  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:47.350206  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:47.353664  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:47.849752  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:47.849780  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:47.849793  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:47.849798  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:47.853354  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:48.350356  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:48.350379  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:48.350387  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:48.350391  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:48.353938  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:48.354537  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:48.849794  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:48.849820  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:48.849829  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:48.849834  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:48.853163  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:49.350186  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:49.350215  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:49.350224  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:49.350229  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:49.353713  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:49.849652  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:49.849676  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:49.849684  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:49.849687  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:49.853033  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.350113  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:50.350142  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:50.350153  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:50.350159  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:50.353742  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.849593  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:50.849613  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:50.849621  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:50.849624  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:50.852952  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.853510  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:51.349926  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:51.349948  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:51.349957  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:51.349963  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:51.353301  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:51.849615  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:51.849638  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:51.849646  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:51.849655  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:51.853844  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:52.350547  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.350572  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.350580  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.350584  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.354248  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:52.850223  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.850252  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.850263  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.850268  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.853470  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:52.854190  106017 node_ready.go:49] node "ha-565823-m02" has status "Ready":"True"
	I1212 00:00:52.854220  106017 node_ready.go:38] duration metric: took 20.504892955s for node "ha-565823-m02" to be "Ready" ...
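
The long run of GET /api/v1/nodes/ha-565823-m02 requests above is a readiness poll: minikube re-reads the Node object every ~500ms until its Ready condition turns True, which takes about 20.5s here. The same wait expressed with client-go, assuming KUBECONFIG points at the cluster (a sketch, not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // KUBECONFIG is assumed to point at the cluster's kubeconfig file.
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-565823-m02", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node ha-565823-m02 is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for node to become Ready")
        os.Exit(1)
    }
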
	I1212 00:00:52.854231  106017 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:00:52.854318  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:52.854327  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.854334  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.854339  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.859106  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:52.865543  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.865630  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4q46c
	I1212 00:00:52.865638  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.865646  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.865651  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.868523  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.869398  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.869413  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.869424  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.869431  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.871831  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.872543  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.872562  106017 pod_ready.go:82] duration metric: took 6.990987ms for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.872571  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.872619  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mqzbv
	I1212 00:00:52.872627  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.872633  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.872639  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.874818  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.875523  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.875541  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.875551  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.875557  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.877466  106017 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:00:52.878112  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.878131  106017 pod_ready.go:82] duration metric: took 5.554087ms for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.878140  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.878190  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823
	I1212 00:00:52.878197  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.878204  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.878211  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.880364  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.880870  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.880885  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.880891  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.880895  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.883116  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.883560  106017 pod_ready.go:93] pod "etcd-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.883576  106017 pod_ready.go:82] duration metric: took 5.430598ms for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.883587  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.883672  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m02
	I1212 00:00:52.883682  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.883691  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.883700  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.886455  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.887079  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.887092  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.887099  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.887104  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.889373  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.889794  106017 pod_ready.go:93] pod "etcd-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.889810  106017 pod_ready.go:82] duration metric: took 6.198051ms for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.889825  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.051288  106017 request.go:632] Waited for 161.36947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:00:53.051368  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:00:53.051379  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.051390  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.051401  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.055000  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.251236  106017 request.go:632] Waited for 195.409824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:53.251334  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:53.251344  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.251352  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.251356  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.254773  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.255341  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:53.255360  106017 pod_ready.go:82] duration metric: took 365.529115ms for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
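(Editor's note) The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side token-bucket limiter, which is separate from server-side API priority and fairness. A minimal sketch of where that limiter is configured; the QPS/Burst values below are illustrative, the library defaults are 5 and 10.

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	// Raising QPS/Burst on rest.Config reduces the "client-side throttling"
    	// waits seen in the log; values here are just an example.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	_ = kubernetes.NewForConfigOrDie(cfg)
    }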
	I1212 00:00:53.255371  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.450696  106017 request.go:632] Waited for 195.24618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:00:53.450768  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:00:53.450773  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.450782  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.450788  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.454132  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.650685  106017 request.go:632] Waited for 195.384956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:53.650745  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:53.650751  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.650758  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.650762  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.654400  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.655229  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:53.655251  106017 pod_ready.go:82] duration metric: took 399.872206ms for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.655268  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.850267  106017 request.go:632] Waited for 194.898023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:00:53.850372  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:00:53.850386  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.850398  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.850408  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.853683  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.050714  106017 request.go:632] Waited for 196.358846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.050791  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.050798  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.050810  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.050821  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.056588  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:00:54.057030  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.057048  106017 pod_ready.go:82] duration metric: took 401.768958ms for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.057064  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.251122  106017 request.go:632] Waited for 193.98571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:00:54.251196  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:00:54.251202  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.251209  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.251215  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.254477  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.451067  106017 request.go:632] Waited for 195.40262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:54.451162  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:54.451179  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.451188  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.451192  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.455097  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.455639  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.455655  106017 pod_ready.go:82] duration metric: took 398.584366ms for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.455670  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.650842  106017 request.go:632] Waited for 195.080577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:00:54.650913  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:00:54.650919  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.650926  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.650932  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.654798  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.851030  106017 request.go:632] Waited for 195.376895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.851100  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.851111  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.851123  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.851133  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.854879  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.855493  106017 pod_ready.go:93] pod "kube-proxy-hr5qc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.855509  106017 pod_ready.go:82] duration metric: took 399.831743ms for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.855522  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.050825  106017 request.go:632] Waited for 195.216303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:00:55.050891  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:00:55.050897  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.050904  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.050910  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.055618  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.250720  106017 request.go:632] Waited for 194.371361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:55.250781  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:55.250786  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.250795  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.250802  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.255100  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.255613  106017 pod_ready.go:93] pod "kube-proxy-p2lsd" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:55.255633  106017 pod_ready.go:82] duration metric: took 400.104583ms for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.255659  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.450909  106017 request.go:632] Waited for 195.147666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:00:55.450990  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:00:55.450999  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.451016  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.451026  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.455430  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.650645  106017 request.go:632] Waited for 194.425591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:55.650713  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:55.650719  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.650727  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.650736  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.654680  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:55.655493  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:55.655512  106017 pod_ready.go:82] duration metric: took 399.840095ms for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.655522  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.850696  106017 request.go:632] Waited for 195.072101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:00:55.850764  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:00:55.850769  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.850777  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.850782  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.855247  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.050354  106017 request.go:632] Waited for 194.294814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:56.050422  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:56.050428  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.050438  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.050441  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.053971  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:56.054426  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:56.054442  106017 pod_ready.go:82] duration metric: took 398.914314ms for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:56.054455  106017 pod_ready.go:39] duration metric: took 3.200213001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
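(Editor's note) The "extra waiting" step above amounts to listing kube-system pods by each system-critical label and checking their PodReady condition. The label selectors are taken from the log; the code itself is an illustrative sketch, not the minikube implementation.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod carries a PodReady=True condition.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	selectors := []string{
    		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
    		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    	}
    	for _, sel := range selectors {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
    			metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			panic(err)
    		}
    		for _, p := range pods.Items {
    			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
    		}
    	}
    }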
	I1212 00:00:56.054475  106017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:00:56.054526  106017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:00:56.072661  106017 api_server.go:72] duration metric: took 23.930895419s to wait for apiserver process to appear ...
	I1212 00:00:56.072689  106017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:00:56.072711  106017 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1212 00:00:56.077698  106017 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1212 00:00:56.077790  106017 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1212 00:00:56.077803  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.077813  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.077823  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.078602  106017 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:00:56.078749  106017 api_server.go:141] control plane version: v1.31.2
	I1212 00:00:56.078777  106017 api_server.go:131] duration metric: took 6.080516ms to wait for apiserver health ...
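(Editor's note) The healthz and version probes above are plain HTTPS GETs against the API server. A minimal sketch using the endpoint from the log; certificate verification is skipped here only for brevity, whereas the real check trusts the generated minikube CA.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: skip TLS verification instead of loading the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get("https://192.168.39.19:8443" + path)
    		if err != nil {
    			panic(err)
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
    	}
    }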
	I1212 00:00:56.078787  106017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:00:56.251224  106017 request.go:632] Waited for 172.358728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.251308  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.251314  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.251322  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.251328  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.257604  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:00:56.263097  106017 system_pods.go:59] 17 kube-system pods found
	I1212 00:00:56.263131  106017 system_pods.go:61] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:00:56.263138  106017 system_pods.go:61] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:00:56.263146  106017 system_pods.go:61] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:00:56.263154  106017 system_pods.go:61] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:00:56.263159  106017 system_pods.go:61] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:00:56.263164  106017 system_pods.go:61] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:00:56.263168  106017 system_pods.go:61] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:00:56.263173  106017 system_pods.go:61] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:00:56.263179  106017 system_pods.go:61] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:00:56.263184  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:00:56.263191  106017 system_pods.go:61] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:00:56.263197  106017 system_pods.go:61] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:00:56.263203  106017 system_pods.go:61] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:00:56.263211  106017 system_pods.go:61] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:00:56.263216  106017 system_pods.go:61] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:00:56.263222  106017 system_pods.go:61] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:00:56.263228  106017 system_pods.go:61] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:00:56.263239  106017 system_pods.go:74] duration metric: took 184.44261ms to wait for pod list to return data ...
	I1212 00:00:56.263253  106017 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:00:56.450737  106017 request.go:632] Waited for 187.395152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:00:56.450799  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:00:56.450805  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.450817  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.450824  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.455806  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.456064  106017 default_sa.go:45] found service account: "default"
	I1212 00:00:56.456083  106017 default_sa.go:55] duration metric: took 192.823176ms for default service account to be created ...
	I1212 00:00:56.456093  106017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:00:56.650300  106017 request.go:632] Waited for 194.107546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.650372  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.650380  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.650392  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.650403  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.656388  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:00:56.662029  106017 system_pods.go:86] 17 kube-system pods found
	I1212 00:00:56.662073  106017 system_pods.go:89] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:00:56.662082  106017 system_pods.go:89] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:00:56.662088  106017 system_pods.go:89] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:00:56.662094  106017 system_pods.go:89] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:00:56.662100  106017 system_pods.go:89] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:00:56.662108  106017 system_pods.go:89] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:00:56.662118  106017 system_pods.go:89] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:00:56.662124  106017 system_pods.go:89] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:00:56.662133  106017 system_pods.go:89] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:00:56.662140  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:00:56.662148  106017 system_pods.go:89] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:00:56.662153  106017 system_pods.go:89] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:00:56.662161  106017 system_pods.go:89] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:00:56.662165  106017 system_pods.go:89] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:00:56.662173  106017 system_pods.go:89] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:00:56.662178  106017 system_pods.go:89] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:00:56.662187  106017 system_pods.go:89] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:00:56.662196  106017 system_pods.go:126] duration metric: took 206.091251ms to wait for k8s-apps to be running ...
	I1212 00:00:56.662210  106017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:00:56.662262  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:00:56.679491  106017 system_svc.go:56] duration metric: took 17.268621ms WaitForService to wait for kubelet
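(Editor's note) The two host checks above run shell commands on the guest: pgrep for the apiserver process and "systemctl is-active" for the kubelet unit. A sketch using os/exec locally; minikube issues the same commands through its SSH runner, and the exit code alone decides pass or fail.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit status 0 means at least one matching kube-apiserver process exists.
    	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
    		fmt.Println("kube-apiserver process not found:", err)
    	}
    	// --quiet suppresses output; only the exit code reports active/inactive.
    	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet service not active:", err)
    	}
    }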
	I1212 00:00:56.679526  106017 kubeadm.go:582] duration metric: took 24.537768524s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:00:56.679546  106017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:00:56.851276  106017 request.go:632] Waited for 171.630771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1212 00:00:56.851341  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1212 00:00:56.851347  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.851354  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.851363  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.856253  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.857605  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:00:56.857634  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:00:56.857650  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:00:56.857655  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:00:56.857661  106017 node_conditions.go:105] duration metric: took 178.109574ms to run NodePressure ...
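(Editor's note) The NodePressure step reads the capacity fields each node reports. A sketch, assuming a kubeconfig path, that prints the same two figures shown in the log (ephemeral-storage and cpu) for every node in the cluster.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }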
	I1212 00:00:56.857683  106017 start.go:241] waiting for startup goroutines ...
	I1212 00:00:56.857713  106017 start.go:255] writing updated cluster config ...
	I1212 00:00:56.859819  106017 out.go:201] 
	I1212 00:00:56.861355  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:56.861459  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:56.863133  106017 out.go:177] * Starting "ha-565823-m03" control-plane node in "ha-565823" cluster
	I1212 00:00:56.864330  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:00:56.864351  106017 cache.go:56] Caching tarball of preloaded images
	I1212 00:00:56.864443  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:00:56.864454  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 00:00:56.864537  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:56.864703  106017 start.go:360] acquireMachinesLock for ha-565823-m03: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:00:56.864743  106017 start.go:364] duration metric: took 22.236µs to acquireMachinesLock for "ha-565823-m03"
	I1212 00:00:56.864764  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:56.864862  106017 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1212 00:00:56.866313  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 00:00:56.866390  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:56.866430  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:56.881400  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1212 00:00:56.881765  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:56.882247  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:56.882274  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:56.882594  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:56.882778  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:00:56.882918  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:00:56.883084  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1212 00:00:56.883116  106017 client.go:168] LocalClient.Create starting
	I1212 00:00:56.883150  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 00:00:56.883194  106017 main.go:141] libmachine: Decoding PEM data...
	I1212 00:00:56.883215  106017 main.go:141] libmachine: Parsing certificate...
	I1212 00:00:56.883281  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 00:00:56.883314  106017 main.go:141] libmachine: Decoding PEM data...
	I1212 00:00:56.883330  106017 main.go:141] libmachine: Parsing certificate...
	I1212 00:00:56.883354  106017 main.go:141] libmachine: Running pre-create checks...
	I1212 00:00:56.883365  106017 main.go:141] libmachine: (ha-565823-m03) Calling .PreCreateCheck
	I1212 00:00:56.883572  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:00:56.883977  106017 main.go:141] libmachine: Creating machine...
	I1212 00:00:56.883994  106017 main.go:141] libmachine: (ha-565823-m03) Calling .Create
	I1212 00:00:56.884152  106017 main.go:141] libmachine: (ha-565823-m03) Creating KVM machine...
	I1212 00:00:56.885388  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found existing default KVM network
	I1212 00:00:56.885537  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found existing private KVM network mk-ha-565823
	I1212 00:00:56.885677  106017 main.go:141] libmachine: (ha-565823-m03) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 ...
	I1212 00:00:56.885696  106017 main.go:141] libmachine: (ha-565823-m03) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 00:00:56.885764  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:56.885674  106823 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:00:56.885859  106017 main.go:141] libmachine: (ha-565823-m03) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 00:00:57.157670  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.157529  106823 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa...
	I1212 00:00:57.207576  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.207455  106823 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/ha-565823-m03.rawdisk...
	I1212 00:00:57.207627  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Writing magic tar header
	I1212 00:00:57.207643  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Writing SSH key tar header
	I1212 00:00:57.207726  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.207648  106823 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 ...
	I1212 00:00:57.207776  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03
	I1212 00:00:57.207803  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 (perms=drwx------)
	I1212 00:00:57.207814  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 00:00:57.207826  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:00:57.207832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 00:00:57.207841  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 00:00:57.207846  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins
	I1212 00:00:57.207853  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home
	I1212 00:00:57.207859  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Skipping /home - not owner
	I1212 00:00:57.207869  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 00:00:57.207875  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 00:00:57.207903  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 00:00:57.207923  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 00:00:57.207937  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 00:00:57.207945  106017 main.go:141] libmachine: (ha-565823-m03) Creating domain...
	I1212 00:00:57.208764  106017 main.go:141] libmachine: (ha-565823-m03) define libvirt domain using xml: 
	I1212 00:00:57.208779  106017 main.go:141] libmachine: (ha-565823-m03) <domain type='kvm'>
	I1212 00:00:57.208785  106017 main.go:141] libmachine: (ha-565823-m03)   <name>ha-565823-m03</name>
	I1212 00:00:57.208790  106017 main.go:141] libmachine: (ha-565823-m03)   <memory unit='MiB'>2200</memory>
	I1212 00:00:57.208795  106017 main.go:141] libmachine: (ha-565823-m03)   <vcpu>2</vcpu>
	I1212 00:00:57.208799  106017 main.go:141] libmachine: (ha-565823-m03)   <features>
	I1212 00:00:57.208803  106017 main.go:141] libmachine: (ha-565823-m03)     <acpi/>
	I1212 00:00:57.208807  106017 main.go:141] libmachine: (ha-565823-m03)     <apic/>
	I1212 00:00:57.208816  106017 main.go:141] libmachine: (ha-565823-m03)     <pae/>
	I1212 00:00:57.208827  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.208832  106017 main.go:141] libmachine: (ha-565823-m03)   </features>
	I1212 00:00:57.208837  106017 main.go:141] libmachine: (ha-565823-m03)   <cpu mode='host-passthrough'>
	I1212 00:00:57.208849  106017 main.go:141] libmachine: (ha-565823-m03)   
	I1212 00:00:57.208858  106017 main.go:141] libmachine: (ha-565823-m03)   </cpu>
	I1212 00:00:57.208866  106017 main.go:141] libmachine: (ha-565823-m03)   <os>
	I1212 00:00:57.208875  106017 main.go:141] libmachine: (ha-565823-m03)     <type>hvm</type>
	I1212 00:00:57.208882  106017 main.go:141] libmachine: (ha-565823-m03)     <boot dev='cdrom'/>
	I1212 00:00:57.208899  106017 main.go:141] libmachine: (ha-565823-m03)     <boot dev='hd'/>
	I1212 00:00:57.208912  106017 main.go:141] libmachine: (ha-565823-m03)     <bootmenu enable='no'/>
	I1212 00:00:57.208918  106017 main.go:141] libmachine: (ha-565823-m03)   </os>
	I1212 00:00:57.208926  106017 main.go:141] libmachine: (ha-565823-m03)   <devices>
	I1212 00:00:57.208933  106017 main.go:141] libmachine: (ha-565823-m03)     <disk type='file' device='cdrom'>
	I1212 00:00:57.208946  106017 main.go:141] libmachine: (ha-565823-m03)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/boot2docker.iso'/>
	I1212 00:00:57.208957  106017 main.go:141] libmachine: (ha-565823-m03)       <target dev='hdc' bus='scsi'/>
	I1212 00:00:57.208964  106017 main.go:141] libmachine: (ha-565823-m03)       <readonly/>
	I1212 00:00:57.208971  106017 main.go:141] libmachine: (ha-565823-m03)     </disk>
	I1212 00:00:57.208981  106017 main.go:141] libmachine: (ha-565823-m03)     <disk type='file' device='disk'>
	I1212 00:00:57.208993  106017 main.go:141] libmachine: (ha-565823-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 00:00:57.209040  106017 main.go:141] libmachine: (ha-565823-m03)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/ha-565823-m03.rawdisk'/>
	I1212 00:00:57.209066  106017 main.go:141] libmachine: (ha-565823-m03)       <target dev='hda' bus='virtio'/>
	I1212 00:00:57.209075  106017 main.go:141] libmachine: (ha-565823-m03)     </disk>
	I1212 00:00:57.209092  106017 main.go:141] libmachine: (ha-565823-m03)     <interface type='network'>
	I1212 00:00:57.209105  106017 main.go:141] libmachine: (ha-565823-m03)       <source network='mk-ha-565823'/>
	I1212 00:00:57.209114  106017 main.go:141] libmachine: (ha-565823-m03)       <model type='virtio'/>
	I1212 00:00:57.209125  106017 main.go:141] libmachine: (ha-565823-m03)     </interface>
	I1212 00:00:57.209136  106017 main.go:141] libmachine: (ha-565823-m03)     <interface type='network'>
	I1212 00:00:57.209145  106017 main.go:141] libmachine: (ha-565823-m03)       <source network='default'/>
	I1212 00:00:57.209155  106017 main.go:141] libmachine: (ha-565823-m03)       <model type='virtio'/>
	I1212 00:00:57.209164  106017 main.go:141] libmachine: (ha-565823-m03)     </interface>
	I1212 00:00:57.209179  106017 main.go:141] libmachine: (ha-565823-m03)     <serial type='pty'>
	I1212 00:00:57.209191  106017 main.go:141] libmachine: (ha-565823-m03)       <target port='0'/>
	I1212 00:00:57.209198  106017 main.go:141] libmachine: (ha-565823-m03)     </serial>
	I1212 00:00:57.209211  106017 main.go:141] libmachine: (ha-565823-m03)     <console type='pty'>
	I1212 00:00:57.209219  106017 main.go:141] libmachine: (ha-565823-m03)       <target type='serial' port='0'/>
	I1212 00:00:57.209228  106017 main.go:141] libmachine: (ha-565823-m03)     </console>
	I1212 00:00:57.209238  106017 main.go:141] libmachine: (ha-565823-m03)     <rng model='virtio'>
	I1212 00:00:57.209275  106017 main.go:141] libmachine: (ha-565823-m03)       <backend model='random'>/dev/random</backend>
	I1212 00:00:57.209299  106017 main.go:141] libmachine: (ha-565823-m03)     </rng>
	I1212 00:00:57.209310  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.209316  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.209327  106017 main.go:141] libmachine: (ha-565823-m03)   </devices>
	I1212 00:00:57.209344  106017 main.go:141] libmachine: (ha-565823-m03) </domain>
	I1212 00:00:57.209358  106017 main.go:141] libmachine: (ha-565823-m03) 
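(Editor's note) The XML block logged above is handed to libvirt to define and boot the domain. A hedged sketch using the github.com/libvirt/libvirt-go bindings to illustrate that step; the XML file name is hypothetical and this is not the kvm2 driver's exact code.

    package main

    import (
    	"os"

    	"github.com/libvirt/libvirt-go"
    )

    // defineAndStart persists the domain definition and boots it, mirroring the
    // "define libvirt domain using xml" / "Creating domain..." lines in the log.
    func defineAndStart(domainXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return err
    	}
    	defer dom.Free()

    	return dom.Create() // start the VM
    }

    func main() {
    	xml, err := os.ReadFile("ha-565823-m03.xml") // hypothetical file holding the XML above
    	if err != nil {
    		panic(err)
    	}
    	if err := defineAndStart(string(xml)); err != nil {
    		panic(err)
    	}
    }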
	I1212 00:00:57.216296  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:a0:11:b6 in network default
	I1212 00:00:57.216833  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring networks are active...
	I1212 00:00:57.216849  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:57.217611  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring network default is active
	I1212 00:00:57.217884  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring network mk-ha-565823 is active
	I1212 00:00:57.218224  106017 main.go:141] libmachine: (ha-565823-m03) Getting domain xml...
	I1212 00:00:57.218920  106017 main.go:141] libmachine: (ha-565823-m03) Creating domain...
	I1212 00:00:58.452742  106017 main.go:141] libmachine: (ha-565823-m03) Waiting to get IP...
	I1212 00:00:58.453425  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:58.453790  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:58.453832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:58.453785  106823 retry.go:31] will retry after 272.104158ms: waiting for machine to come up
	I1212 00:00:58.727281  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:58.727898  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:58.727928  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:58.727841  106823 retry.go:31] will retry after 285.622453ms: waiting for machine to come up
	I1212 00:00:59.015493  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.016037  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.016069  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.015997  106823 retry.go:31] will retry after 462.910385ms: waiting for machine to come up
	I1212 00:00:59.480661  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.481128  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.481154  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.481091  106823 retry.go:31] will retry after 428.639733ms: waiting for machine to come up
	I1212 00:00:59.911938  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.912474  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.912505  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.912415  106823 retry.go:31] will retry after 493.229639ms: waiting for machine to come up
	I1212 00:01:00.406997  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:00.407456  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:00.407482  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:00.407400  106823 retry.go:31] will retry after 633.230425ms: waiting for machine to come up
	I1212 00:01:01.042449  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:01.042884  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:01.042905  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:01.042838  106823 retry.go:31] will retry after 978.049608ms: waiting for machine to come up
	I1212 00:01:02.022776  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:02.023212  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:02.023245  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:02.023153  106823 retry.go:31] will retry after 1.111513755s: waiting for machine to come up
	I1212 00:01:03.136308  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:03.136734  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:03.136763  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:03.136679  106823 retry.go:31] will retry after 1.728462417s: waiting for machine to come up
	I1212 00:01:04.867619  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:04.868118  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:04.868157  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:04.868052  106823 retry.go:31] will retry after 1.898297589s: waiting for machine to come up
	I1212 00:01:06.769272  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:06.769757  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:06.769825  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:06.769731  106823 retry.go:31] will retry after 1.922578081s: waiting for machine to come up
	I1212 00:01:08.693477  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:08.693992  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:08.694026  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:08.693918  106823 retry.go:31] will retry after 2.235570034s: waiting for machine to come up
	I1212 00:01:10.932341  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:10.932805  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:10.932827  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:10.932750  106823 retry.go:31] will retry after 4.200404272s: waiting for machine to come up
	I1212 00:01:15.136581  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:15.136955  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:15.136979  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:15.136906  106823 retry.go:31] will retry after 4.331994391s: waiting for machine to come up
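(Editor's note) The "will retry after ..." lines above are a jittered backoff loop that keeps polling for the guest's DHCP lease until an IP shows up or the deadline passes. A sketch of that pattern; lookupIP is a stand-in for the libvirt lease query on network mk-ha-565823.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a placeholder for "ask libvirt for the DHCP lease of this MAC".
    func lookupIP(mac string) (string, bool) {
    	return "", false // pretend the lease has not appeared yet
    }

    func waitForIP(mac string, deadline time.Time) (string, error) {
    	delay := 250 * time.Millisecond
    	for attempt := 1; time.Now().Before(deadline); attempt++ {
    		if ip, ok := lookupIP(mac); ok {
    			return ip, nil
    		}
    		// Jittered, growing delay, mirroring the increasing retry intervals in the log.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("attempt %d: no lease yet, retrying in %v\n", attempt, sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	if ip, err := waitForIP("52:54:00:03:bd:55", time.Now().Add(10*time.Second)); err == nil {
    		fmt.Println("found IP:", ip)
    	} else {
    		fmt.Println(err)
    	}
    }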
	I1212 00:01:19.472184  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.472659  106017 main.go:141] libmachine: (ha-565823-m03) Found IP for machine: 192.168.39.95
	I1212 00:01:19.472679  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has current primary IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.472686  106017 main.go:141] libmachine: (ha-565823-m03) Reserving static IP address...
	I1212 00:01:19.473105  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find host DHCP lease matching {name: "ha-565823-m03", mac: "52:54:00:03:bd:55", ip: "192.168.39.95"} in network mk-ha-565823
	I1212 00:01:19.544988  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Getting to WaitForSSH function...
	I1212 00:01:19.545019  106017 main.go:141] libmachine: (ha-565823-m03) Reserved static IP address: 192.168.39.95
	I1212 00:01:19.545082  106017 main.go:141] libmachine: (ha-565823-m03) Waiting for SSH to be available...
	I1212 00:01:19.547914  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.548457  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.548493  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.548645  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using SSH client type: external
	I1212 00:01:19.548672  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa (-rw-------)
	I1212 00:01:19.548700  106017 main.go:141] libmachine: (ha-565823-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:01:19.548714  106017 main.go:141] libmachine: (ha-565823-m03) DBG | About to run SSH command:
	I1212 00:01:19.548726  106017 main.go:141] libmachine: (ha-565823-m03) DBG | exit 0
	I1212 00:01:19.675749  106017 main.go:141] libmachine: (ha-565823-m03) DBG | SSH cmd err, output: <nil>: 
	I1212 00:01:19.676029  106017 main.go:141] libmachine: (ha-565823-m03) KVM machine creation complete!
	I1212 00:01:19.676360  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:01:19.676900  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:19.677088  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:19.677296  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:01:19.677311  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetState
	I1212 00:01:19.678472  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:01:19.678488  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:01:19.678497  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:01:19.678505  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.680612  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.680988  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.681021  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.681172  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.681326  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.681449  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.681545  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.681635  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.681832  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.681842  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:01:19.794939  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:01:19.794969  106017 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:01:19.794980  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.797552  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.797884  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.797916  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.798040  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.798220  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.798369  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.798507  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.798667  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.798834  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.798844  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:01:19.912451  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:01:19.912540  106017 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:01:19.912555  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:01:19.912568  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:19.912805  106017 buildroot.go:166] provisioning hostname "ha-565823-m03"
	I1212 00:01:19.912831  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:19.912939  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.915606  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.916027  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.916059  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.916213  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.916386  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.916533  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.916630  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.916776  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.917012  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.917027  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823-m03 && echo "ha-565823-m03" | sudo tee /etc/hostname
	I1212 00:01:20.047071  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823-m03
	
	I1212 00:01:20.047100  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.049609  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.050009  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.050034  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.050209  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.050389  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.050537  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.050700  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.050854  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.051086  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.051105  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:01:20.174838  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
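The hostname step above is idempotent: the generated shell only touches /etc/hosts when the new name is missing, rewriting an existing 127.0.1.1 entry instead of appending a duplicate. A minimal Go sketch of the same edit, applied to the file contents as a string (the function name and in-memory approach are illustrative, not minikube's implementation):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell snippet above: if no line already maps
// the hostname, rewrite an existing "127.0.1.1 ..." entry or append one.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // hostname already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "ha-565823-m03"))
}
```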
	I1212 00:01:20.174877  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:01:20.174898  106017 buildroot.go:174] setting up certificates
	I1212 00:01:20.174909  106017 provision.go:84] configureAuth start
	I1212 00:01:20.174924  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:20.175232  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:20.177664  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.178007  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.178038  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.178124  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.180472  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.180778  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.180806  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.180963  106017 provision.go:143] copyHostCerts
	I1212 00:01:20.180995  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:01:20.181046  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:01:20.181058  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:01:20.181146  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:01:20.181242  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:01:20.181266  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:01:20.181279  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:01:20.181315  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:01:20.181387  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:01:20.181413  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:01:20.181419  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:01:20.181456  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:01:20.181524  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823-m03 san=[127.0.0.1 192.168.39.95 ha-565823-m03 localhost minikube]
	I1212 00:01:20.442822  106017 provision.go:177] copyRemoteCerts
	I1212 00:01:20.442883  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:01:20.442916  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.445614  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.445950  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.445983  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.446122  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.446304  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.446460  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.446571  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:20.533808  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:01:20.533894  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:01:20.558631  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:01:20.558695  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:01:20.584088  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:01:20.584173  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:01:20.608061  106017 provision.go:87] duration metric: took 433.135165ms to configureAuth
	I1212 00:01:20.608090  106017 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:01:20.608294  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:20.608371  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.611003  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.611319  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.611348  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.611489  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.611709  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.611885  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.612026  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.612174  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.612326  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.612341  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:01:20.847014  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:01:20.847049  106017 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:01:20.847062  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetURL
	I1212 00:01:20.848448  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using libvirt version 6000000
	I1212 00:01:20.850813  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.851216  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.851246  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.851443  106017 main.go:141] libmachine: Docker is up and running!
	I1212 00:01:20.851459  106017 main.go:141] libmachine: Reticulating splines...
	I1212 00:01:20.851469  106017 client.go:171] duration metric: took 23.968343391s to LocalClient.Create
	I1212 00:01:20.851499  106017 start.go:167] duration metric: took 23.968416391s to libmachine.API.Create "ha-565823"
	I1212 00:01:20.851513  106017 start.go:293] postStartSetup for "ha-565823-m03" (driver="kvm2")
	I1212 00:01:20.851525  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:01:20.851547  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:20.851812  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:01:20.851848  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.854066  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.854470  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.854498  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.854683  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.854881  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.855047  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.855202  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:20.942769  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:01:20.947268  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:01:20.947295  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:01:20.947350  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:01:20.947427  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:01:20.947438  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:01:20.947517  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:01:20.957067  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:01:20.982552  106017 start.go:296] duration metric: took 131.024484ms for postStartSetup
	I1212 00:01:20.982610  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:01:20.983169  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:20.985456  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.985914  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.985943  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.986219  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:01:20.986450  106017 start.go:128] duration metric: took 24.12157496s to createHost
	I1212 00:01:20.986480  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.988832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.989169  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.989192  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.989296  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.989476  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.989596  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.989695  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.989852  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.990012  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.990022  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:01:21.104340  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961681.076284817
	
	I1212 00:01:21.104366  106017 fix.go:216] guest clock: 1733961681.076284817
	I1212 00:01:21.104376  106017 fix.go:229] Guest: 2024-12-12 00:01:21.076284817 +0000 UTC Remote: 2024-12-12 00:01:20.986466192 +0000 UTC m=+151.148293246 (delta=89.818625ms)
	I1212 00:01:21.104397  106017 fix.go:200] guest clock delta is within tolerance: 89.818625ms
	I1212 00:01:21.104403  106017 start.go:83] releasing machines lock for "ha-565823-m03", held for 24.239651482s
	I1212 00:01:21.104427  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.104703  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:21.107255  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.107654  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.107680  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.109803  106017 out.go:177] * Found network options:
	I1212 00:01:21.111036  106017 out.go:177]   - NO_PROXY=192.168.39.19,192.168.39.103
	W1212 00:01:21.112272  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:01:21.112293  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:01:21.112306  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.112787  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.112963  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.113063  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:01:21.113107  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	W1212 00:01:21.113169  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:01:21.113192  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:01:21.113246  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:01:21.113266  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:21.115806  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.115895  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116242  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.116269  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116313  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.116334  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116399  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:21.116570  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:21.116593  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:21.116694  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:21.116713  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:21.116861  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:21.116856  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:21.116989  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:21.354040  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:01:21.360555  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:01:21.360632  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:01:21.379750  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:01:21.379780  106017 start.go:495] detecting cgroup driver to use...
	I1212 00:01:21.379863  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:01:21.395389  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:01:21.409350  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:01:21.409431  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:01:21.425472  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:01:21.440472  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:01:21.567746  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:01:21.711488  106017 docker.go:233] disabling docker service ...
	I1212 00:01:21.711577  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:01:21.727302  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:01:21.740916  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:01:21.878118  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:01:22.013165  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:01:22.031377  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:01:22.050768  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:01:22.050841  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.062469  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:01:22.062542  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.074854  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.085834  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.096567  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:01:22.110009  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.121122  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.139153  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.150221  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:01:22.160252  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:01:22.160329  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:01:22.175082  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
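The kernel prep above tolerates a missing bridge-netfilter sysctl: when `sysctl net.bridge.bridge-nf-call-iptables` fails because `br_netfilter` is not loaded, the module is loaded with modprobe and IPv4 forwarding is enabled. A rough sketch of that check-then-load pattern (the exec-based helper is an assumption, not minikube's exact code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter loads br_netfilter only when the corresponding
// /proc/sys entry is missing, then turns on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```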
	I1212 00:01:22.185329  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:22.327197  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:01:22.421776  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:01:22.421853  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:01:22.427874  106017 start.go:563] Will wait 60s for crictl version
	I1212 00:01:22.427937  106017 ssh_runner.go:195] Run: which crictl
	I1212 00:01:22.432412  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:01:22.478561  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
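"Will wait 60s for crictl version" boils down to polling `sudo /usr/bin/crictl version` until the runtime answers or the deadline passes. A minimal sketch of such a wait loop (the 2-second interval and helper name are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCrictl polls `crictl version` until the CRI runtime responds
// or the timeout elapses.
func waitForCrictl(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("crictl not ready after %s: %v", timeout, err)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForCrictl(60 * time.Second); err != nil {
		panic(err)
	}
}
```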
	I1212 00:01:22.478659  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:01:22.507894  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:01:22.541025  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:01:22.542600  106017 out.go:177]   - env NO_PROXY=192.168.39.19
	I1212 00:01:22.544205  106017 out.go:177]   - env NO_PROXY=192.168.39.19,192.168.39.103
	I1212 00:01:22.545527  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:22.548679  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:22.549115  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:22.549143  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:22.549402  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:01:22.553987  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:01:22.567227  106017 mustload.go:65] Loading cluster: ha-565823
	I1212 00:01:22.567647  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:22.568059  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:22.568178  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:22.583960  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
	I1212 00:01:22.584451  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:22.584977  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:22.585002  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:22.585378  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:22.585624  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:01:22.587277  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:01:22.587636  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:22.587686  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:22.602128  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I1212 00:01:22.602635  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:22.603141  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:22.603163  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:22.603490  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:22.603676  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:01:22.603824  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.95
	I1212 00:01:22.603837  106017 certs.go:194] generating shared ca certs ...
	I1212 00:01:22.603856  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.603989  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:01:22.604025  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:01:22.604035  106017 certs.go:256] generating profile certs ...
	I1212 00:01:22.604113  106017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:01:22.604138  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c
	I1212 00:01:22.604153  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.95 192.168.39.254]
	I1212 00:01:22.747110  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c ...
	I1212 00:01:22.747151  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c: {Name:mke6cc66706783f55b7ebb6ba30cc07d7c6eb29b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.747333  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c ...
	I1212 00:01:22.747345  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c: {Name:mk0abaf339db164c799eddef60276ad5fb5ed33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.747431  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:01:22.747642  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
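The apiserver serving certificate is regenerated here because its SAN set must now cover the new node: the log shows it being issued for the service VIP 10.96.0.1, loopback, all three node IPs and the control-plane VIP 192.168.39.254, plus the usual host names. A self-signed Go sketch showing how that SAN list lands in an x509 template (minikube actually signs against its profile CA key rather than self-signing):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the log line above: service VIP, loopback,
	// the three node IPs and the control-plane VIP.
	ips := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.19"), net.ParseIP("192.168.39.103"),
		net.ParseIP("192.168.39.95"), net.ParseIP("192.168.39.254"),
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-565823-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-565823-m03", "localhost", "minikube"},
		IPAddresses:  ips,
	}
	// Self-signed here for brevity; the real cert is signed by the profile CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```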
	I1212 00:01:22.747827  106017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:01:22.747853  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:01:22.747874  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:01:22.747894  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:01:22.747911  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:01:22.747929  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:01:22.747949  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:01:22.747967  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:01:22.767751  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:01:22.767871  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:01:22.767924  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:01:22.767939  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:01:22.767972  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:01:22.768009  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:01:22.768041  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:01:22.768088  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:01:22.768123  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:22.768140  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:01:22.768153  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:01:22.768246  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:01:22.771620  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:22.772074  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:01:22.772105  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:22.772278  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:01:22.772487  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:01:22.772661  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:01:22.772805  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:01:22.855976  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 00:01:22.862422  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 00:01:22.875336  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 00:01:22.881430  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 00:01:22.892620  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 00:01:22.897804  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 00:01:22.910746  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 00:01:22.916511  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1212 00:01:22.927437  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 00:01:22.932403  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 00:01:22.945174  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 00:01:22.949699  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 00:01:22.963425  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:01:22.991332  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:01:23.014716  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:01:23.038094  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:01:23.062120  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1212 00:01:23.086604  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:01:23.110420  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:01:23.136037  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:01:23.162577  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:01:23.188311  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:01:23.211713  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:01:23.235230  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 00:01:23.253375  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 00:01:23.271455  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 00:01:23.289505  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1212 00:01:23.307850  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 00:01:23.325848  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 00:01:23.344038  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 00:01:23.362393  106017 ssh_runner.go:195] Run: openssl version
	I1212 00:01:23.368722  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:01:23.380405  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.385472  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.385534  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.392130  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:01:23.405241  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:01:23.418140  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.422762  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.422819  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.428754  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:01:23.441496  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:01:23.454394  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.459170  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.459227  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.465192  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:01:23.476720  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:01:23.481551  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:01:23.481615  106017 kubeadm.go:934] updating node {m03 192.168.39.95 8443 v1.31.2 crio true true} ...
	I1212 00:01:23.481715  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:01:23.481752  106017 kube-vip.go:115] generating kube-vip config ...
	I1212 00:01:23.481784  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:01:23.499895  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:01:23.499971  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
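The manifest above is what "generating kube-vip config" emits for this control-plane node: a static pod running `kube-vip manager` with ARP leader election on the VIP 192.168.39.254 and control-plane load balancing on port 8443. A small sketch of rendering such a manifest from the variable fields with text/template (the abbreviated template is illustrative, not minikube's own):

```go
package main

import (
	"os"
	"text/template"
)

// Abbreviated static-pod template; only the VIP-specific fields are shown.
const kubeVIPTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: address, value: "{{.VIP}}"}
    - {name: lb_enable, value: "true"}
    - {name: lb_port, value: "{{.Port}}"}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVIPTmpl))
	_ = t.Execute(os.Stdout, struct {
		Image, VIP, Port string
	}{"ghcr.io/kube-vip/kube-vip:v0.8.7", "192.168.39.254", "8443"})
}
```

Because it is written to /etc/kubernetes/manifests, the kubelet itself starts the pod, so the VIP can come up before the API server is reachable.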
	I1212 00:01:23.500042  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:01:23.510617  106017 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1212 00:01:23.510681  106017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1212 00:01:23.520696  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1212 00:01:23.520748  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:01:23.520697  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1212 00:01:23.520779  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:01:23.520698  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1212 00:01:23.520844  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:01:23.520847  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:01:23.520904  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:01:23.539476  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:01:23.539619  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:01:23.539628  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1212 00:01:23.539658  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1212 00:01:23.539704  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1212 00:01:23.539735  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1212 00:01:23.554300  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1212 00:01:23.554341  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
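The three blocks above follow the same pattern per binary: a remote stat that exits non-zero when the file is missing, followed by an scp from the local cache. A minimal local-filesystem sketch of that "check, then copy from cache" pattern is shown below (not minikube's code; the remote stat/scp is replaced by os.Stat and io.Copy, and the paths are copied from the log purely for illustration).

// Illustrative sketch: copy a cached binary to its destination only when it is
// not already present, mirroring the stat-then-scp fallback in the log above.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst does not already exist.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("copied %s -> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	// Paths mirror the log for illustration only.
	_ = ensureBinary(
		os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.31.2/kubelet"),
		"/var/lib/minikube/binaries/v1.31.2/kubelet",
	)
}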
	I1212 00:01:24.410276  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 00:01:24.421207  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 00:01:24.438691  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:01:24.456935  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:01:24.474104  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:01:24.478799  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:01:24.492116  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:24.635069  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:01:24.653898  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:01:24.654454  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:24.654529  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:24.669805  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 00:01:24.670391  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:24.671018  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:24.671047  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:24.671400  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:24.671580  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:01:24.671761  106017 start.go:317] joinCluster: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:01:24.671883  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:01:24.671905  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:01:24.675034  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:24.675479  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:01:24.675501  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:24.675693  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:01:24.675871  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:01:24.676006  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:01:24.676127  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:01:24.845860  106017 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:01:24.845904  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4sbqiu.4yic5pe52bxp935w --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
	I1212 00:01:47.124612  106017 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4sbqiu.4yic5pe52bxp935w --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443": (22.27867542s)
	I1212 00:01:47.124662  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:01:47.623528  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823-m03 minikube.k8s.io/updated_at=2024_12_12T00_01_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=false
	I1212 00:01:47.763869  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565823-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1212 00:01:47.919307  106017 start.go:319] duration metric: took 23.247542297s to joinCluster
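The two Run: lines just above label the freshly joined node and remove the control-plane NoSchedule taint by shelling out to kubectl. As a minimal illustration (not minikube's code), the same two commands can be driven from Go via os/exec; the kubeconfig path and node name are copied from the log, and the label set is trimmed to one key.

// Illustrative sketch: run the post-join label and taint commands with os/exec.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s(err=%v)\n", args, out, err)
}

func main() {
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig" // path from the log
	run("kubectl", kubeconfig, "label", "--overwrite", "nodes", "ha-565823-m03",
		"minikube.k8s.io/primary=false")
	run("kubectl", kubeconfig, "taint", "nodes", "ha-565823-m03",
		"node-role.kubernetes.io/control-plane:NoSchedule-")
}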
	I1212 00:01:47.919407  106017 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:01:47.919784  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:47.920983  106017 out.go:177] * Verifying Kubernetes components...
	I1212 00:01:47.922471  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:48.195755  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:01:48.249445  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:01:48.249790  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 00:01:48.249881  106017 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I1212 00:01:48.250202  106017 node_ready.go:35] waiting up to 6m0s for node "ha-565823-m03" to be "Ready" ...
	I1212 00:01:48.250300  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:48.250311  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:48.250329  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:48.250338  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:48.255147  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:48.750647  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:48.750680  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:48.750691  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:48.750699  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:48.755066  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:49.251152  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:49.251203  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:49.251216  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:49.251222  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:49.254927  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:49.751403  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:49.751424  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:49.751432  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:49.751436  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:49.754669  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:50.250595  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:50.250620  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:50.250629  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:50.250633  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:50.254009  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:50.254537  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:50.751206  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:50.751237  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:50.751250  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:50.751256  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:50.755159  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:51.250921  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:51.250950  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:51.250961  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:51.250967  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:51.255349  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:51.751245  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:51.751270  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:51.751283  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:51.751290  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:51.755162  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:52.250889  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:52.250916  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:52.250929  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:52.250935  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:52.254351  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:52.255115  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:52.750458  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:52.750481  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:52.750492  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:52.750499  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:52.753763  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:53.251029  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:53.251058  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:53.251071  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:53.251077  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:53.256338  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:01:53.751364  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:53.751389  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:53.751401  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:53.751414  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:53.754657  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:54.250629  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:54.250665  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:54.250675  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:54.250680  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:54.254457  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:54.255509  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:54.750450  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:54.750484  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:54.750496  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:54.750502  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:54.753928  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:55.251309  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:55.251338  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:55.251347  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:55.251351  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:55.254751  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:55.751050  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:55.751076  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:55.751089  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:55.751093  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:55.755810  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:56.250473  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:56.250504  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:56.250524  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:56.250530  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:56.253711  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:56.751414  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:56.751435  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:56.751444  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:56.751449  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:56.755218  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:56.755864  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:57.251118  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:57.251142  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:57.251150  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:57.251154  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:57.254747  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:57.750776  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:57.750806  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:57.750817  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:57.750829  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:57.754143  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:58.251295  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:58.251320  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:58.251329  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:58.251333  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:58.254626  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:58.750576  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:58.750599  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:58.750608  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:58.750611  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:58.754105  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:59.251173  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:59.251200  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:59.251209  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:59.251213  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:59.254355  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:59.255121  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:59.750953  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:59.750977  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:59.750985  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:59.750989  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:59.754627  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:00.250978  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:00.251004  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:00.251013  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:00.251016  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:00.254467  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:00.750877  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:00.750901  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:00.750912  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:00.750918  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:00.754221  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:01.251370  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:01.251393  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:01.251401  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:01.251405  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:01.254805  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:01.255406  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:01.750655  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:01.750676  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:01.750684  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:01.750690  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:01.753736  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:02.251367  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:02.251390  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:02.251399  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:02.251403  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:02.255039  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:02.750915  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:02.750948  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:02.750958  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:02.750964  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:02.754145  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:03.250760  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:03.250788  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:03.250798  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:03.250805  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:03.260534  106017 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:02:03.261313  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:03.750548  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:03.750571  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:03.750582  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:03.750587  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:03.753887  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:04.250808  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:04.250830  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:04.250838  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:04.250841  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:04.254163  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:04.750428  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:04.750453  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:04.750464  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:04.750469  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:04.754235  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.251014  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:05.251038  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:05.251053  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:05.251061  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:05.254268  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.751257  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:05.751286  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:05.751300  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:05.751309  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:05.754346  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.755137  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:06.250474  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:06.250500  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:06.250510  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:06.250515  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:06.253901  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:06.751012  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:06.751043  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:06.751062  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:06.751067  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:06.755777  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:02:07.250458  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:07.250481  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.250489  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.250494  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.254349  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:07.751140  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:07.751164  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.751172  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.751178  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.754545  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:07.755268  106017 node_ready.go:49] node "ha-565823-m03" has status "Ready":"True"
	I1212 00:02:07.755289  106017 node_ready.go:38] duration metric: took 19.505070997s for node "ha-565823-m03" to be "Ready" ...
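The repeated GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03 calls above are the readiness poll behind node_ready.go: the node object is fetched every ~500ms until its Ready condition turns True. A minimal client-go sketch of the same check follows (this is an illustration, not minikube's implementation; the kubeconfig path is an assumed placeholder).

// Illustrative sketch: poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-565823-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // roughly the polling interval seen in the log
	}
}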
	I1212 00:02:07.755298  106017 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:02:07.755371  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:07.755381  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.755388  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.755394  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.764865  106017 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:02:07.771847  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.771957  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4q46c
	I1212 00:02:07.771969  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.771979  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.771985  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.774662  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.775180  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.775197  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.775207  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.775212  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.778204  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.778657  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.778673  106017 pod_ready.go:82] duration metric: took 6.798091ms for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.778684  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.778739  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mqzbv
	I1212 00:02:07.778749  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.778759  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.778766  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.780968  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.781650  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.781667  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.781674  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.781679  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.783908  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.784542  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.784564  106017 pod_ready.go:82] duration metric: took 5.872725ms for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.784576  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.784636  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823
	I1212 00:02:07.784644  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.784651  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.784657  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.786892  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.787666  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.787681  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.787688  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.787694  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.789880  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.790470  106017 pod_ready.go:93] pod "etcd-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.790486  106017 pod_ready.go:82] duration metric: took 5.899971ms for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.790494  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.790537  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m02
	I1212 00:02:07.790545  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.790552  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.790555  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.793137  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.793764  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:07.793781  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.793791  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.793799  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.796241  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.796610  106017 pod_ready.go:93] pod "etcd-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.796625  106017 pod_ready.go:82] duration metric: took 6.124204ms for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.796636  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.952109  106017 request.go:632] Waited for 155.381921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m03
	I1212 00:02:07.952174  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m03
	I1212 00:02:07.952179  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.952187  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.952193  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.955641  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.151556  106017 request.go:632] Waited for 195.239119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:08.151668  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:08.151684  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.151694  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.151702  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.154961  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.155639  106017 pod_ready.go:93] pod "etcd-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.155660  106017 pod_ready.go:82] duration metric: took 359.016335ms for pod "etcd-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
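The "Waited for ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's client-side rate limiter, which defaults to roughly QPS 5 / Burst 10 when rest.Config leaves those fields at zero (as the QPS:0, Burst:0 dump above does). As a hedged illustration of how that limiter is tuned, the sketch below raises QPS and Burst on rest.Config; the values and kubeconfig path are assumptions, not what minikube configures.

// Illustrative sketch: relax client-go's client-side rate limiter.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // steady-state requests per second before throttling
	cfg.Burst = 100 // short bursts allowed above the steady-state QPS
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client configured: %T (QPS=%v Burst=%v)\n", cs, cfg.QPS, cfg.Burst)
}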
	I1212 00:02:08.155677  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.351679  106017 request.go:632] Waited for 195.932687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:02:08.351780  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:02:08.351790  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.351808  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.351821  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.355049  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.552214  106017 request.go:632] Waited for 196.357688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:08.552278  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:08.552283  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.552291  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.552295  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.555420  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.555971  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.555995  106017 pod_ready.go:82] duration metric: took 400.310286ms for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.556009  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.752055  106017 request.go:632] Waited for 195.936446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:02:08.752134  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:02:08.752141  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.752152  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.752161  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.755742  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.951367  106017 request.go:632] Waited for 194.249731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:08.951449  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:08.951462  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.951477  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.951487  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.956306  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:02:08.956889  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.956911  106017 pod_ready.go:82] duration metric: took 400.890038ms for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.956924  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.152049  106017 request.go:632] Waited for 195.045457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m03
	I1212 00:02:09.152139  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m03
	I1212 00:02:09.152145  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.152153  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.152158  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.155700  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.351978  106017 request.go:632] Waited for 195.381489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:09.352057  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:09.352066  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.352075  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.352081  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.355842  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.356358  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:09.356379  106017 pod_ready.go:82] duration metric: took 399.447689ms for pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.356389  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.551411  106017 request.go:632] Waited for 194.933011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:02:09.551471  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:02:09.551476  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.551485  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.551489  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.554894  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.751755  106017 request.go:632] Waited for 196.244381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:09.751835  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:09.751841  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.751848  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.751854  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.754952  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.755722  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:09.755745  106017 pod_ready.go:82] duration metric: took 399.345607ms for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.755761  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.951966  106017 request.go:632] Waited for 196.120958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:02:09.952068  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:02:09.952080  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.952092  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.952104  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.955804  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.152052  106017 request.go:632] Waited for 195.597395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:10.152141  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:10.152152  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.152161  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.152166  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.155038  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:10.155549  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.155569  106017 pod_ready.go:82] duration metric: took 399.796008ms for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.155583  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.351722  106017 request.go:632] Waited for 196.013906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m03
	I1212 00:02:10.351803  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m03
	I1212 00:02:10.351811  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.351826  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.351837  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.355190  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.551684  106017 request.go:632] Waited for 195.377569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:10.551808  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:10.551816  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.551824  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.551829  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.555651  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.556178  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.556199  106017 pod_ready.go:82] duration metric: took 400.605936ms for pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.556213  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.751531  106017 request.go:632] Waited for 195.242482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:02:10.751632  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:02:10.751654  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.751669  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.751679  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.755253  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.951536  106017 request.go:632] Waited for 195.352907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:10.951607  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:10.951622  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.951633  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.951641  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.954707  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.955175  106017 pod_ready.go:93] pod "kube-proxy-hr5qc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.955193  106017 pod_ready.go:82] duration metric: took 398.973413ms for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.955204  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klpqs" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.151212  106017 request.go:632] Waited for 195.914198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-klpqs
	I1212 00:02:11.151269  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-klpqs
	I1212 00:02:11.151274  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.151282  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.151285  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.154675  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.351669  106017 request.go:632] Waited for 196.350446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:11.351765  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:11.351776  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.351788  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.351796  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.354976  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.355603  106017 pod_ready.go:93] pod "kube-proxy-klpqs" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:11.355620  106017 pod_ready.go:82] duration metric: took 400.410567ms for pod "kube-proxy-klpqs" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.355631  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.551803  106017 request.go:632] Waited for 196.076188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:02:11.551880  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:02:11.551892  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.551903  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.551915  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.555786  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.751843  106017 request.go:632] Waited for 195.375551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:11.751907  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:11.751912  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.751919  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.751924  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.755210  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.755911  106017 pod_ready.go:93] pod "kube-proxy-p2lsd" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:11.755936  106017 pod_ready.go:82] duration metric: took 400.297319ms for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.755951  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.951789  106017 request.go:632] Waited for 195.74885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:02:11.951866  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:02:11.951874  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.951891  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.951904  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.955633  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.152006  106017 request.go:632] Waited for 195.692099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:12.152097  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:12.152112  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.152120  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.152125  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.155247  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.155984  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.156005  106017 pod_ready.go:82] duration metric: took 400.045384ms for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.156015  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.352045  106017 request.go:632] Waited for 195.938605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:02:12.352121  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:02:12.352126  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.352134  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.352143  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.355894  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.551904  106017 request.go:632] Waited for 195.351995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:12.551970  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:12.551977  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.551988  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.551993  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.555652  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.556289  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.556309  106017 pod_ready.go:82] duration metric: took 400.287227ms for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.556319  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.751148  106017 request.go:632] Waited for 194.747976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m03
	I1212 00:02:12.751223  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m03
	I1212 00:02:12.751231  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.751244  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.751260  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.754576  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.951572  106017 request.go:632] Waited for 196.386091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:12.951672  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:12.951678  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.951689  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.951693  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.954814  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.955311  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.955329  106017 pod_ready.go:82] duration metric: took 398.995551ms for pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.955348  106017 pod_ready.go:39] duration metric: took 5.200033872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
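	The readiness loop logged above alternates a throttled GET on each control-plane pod with a GET on its node until the pod reports Ready, with a 6m budget per pod. As a rough illustration only (not minikube's actual pod_ready.go; the pod name and kubeconfig path are example assumptions), the same kind of wait with client-go could look like this:

	// Sketch: poll one pod until its Ready condition is True, mirroring the wait above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same per-pod budget the log reports
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-565823", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(200 * time.Millisecond) // comparable to the client-side throttling seen above
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}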
	I1212 00:02:12.955369  106017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:02:12.955437  106017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:02:12.971324  106017 api_server.go:72] duration metric: took 25.051879033s to wait for apiserver process to appear ...
	I1212 00:02:12.971354  106017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:02:12.971379  106017 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1212 00:02:12.977750  106017 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1212 00:02:12.977832  106017 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1212 00:02:12.977843  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.977856  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.977863  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.978833  106017 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:02:12.978904  106017 api_server.go:141] control plane version: v1.31.2
	I1212 00:02:12.978918  106017 api_server.go:131] duration metric: took 7.558877ms to wait for apiserver health ...
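	After confirming the kube-apiserver process with pgrep, the tool probes /healthz (expecting the body "ok") and reads /version, which is where the "control plane version: v1.31.2" line comes from. A minimal sketch of the same two probes, reusing a *kubernetes.Clientset built as in the previous example:

	// Sketch: hit /healthz and read the server version via client-go's REST client.
	package health

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
	)

	func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
		// Raw GET against /healthz; a healthy apiserver answers with the body "ok".
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return fmt.Errorf("healthz probe failed: %w", err)
		}
		fmt.Printf("/healthz returned %q\n", string(body))

		// /version corresponds to the "control plane version" line above.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			return fmt.Errorf("version query failed: %w", err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion)
		return nil
	}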
	I1212 00:02:12.978926  106017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:02:13.151199  106017 request.go:632] Waited for 172.198927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.151292  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.151303  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.151316  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.151325  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.157197  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:02:13.164153  106017 system_pods.go:59] 24 kube-system pods found
	I1212 00:02:13.164182  106017 system_pods.go:61] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:02:13.164187  106017 system_pods.go:61] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:02:13.164191  106017 system_pods.go:61] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:02:13.164194  106017 system_pods.go:61] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:02:13.164197  106017 system_pods.go:61] "etcd-ha-565823-m03" [506e75d1-9e81-4c24-bf45-26f7fde169fa] Running
	I1212 00:02:13.164200  106017 system_pods.go:61] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:02:13.164203  106017 system_pods.go:61] "kindnet-jffrr" [d455764c-714e-4a39-9d11-1fc4ab3ae0c9] Running
	I1212 00:02:13.164206  106017 system_pods.go:61] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:02:13.164209  106017 system_pods.go:61] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:02:13.164211  106017 system_pods.go:61] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:02:13.164214  106017 system_pods.go:61] "kube-apiserver-ha-565823-m03" [636f5858-1c42-480d-9810-abf8aa16aa69] Running
	I1212 00:02:13.164218  106017 system_pods.go:61] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:02:13.164221  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:02:13.164224  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m03" [47632e43-a401-4553-9bba-e8296023a6a2] Running
	I1212 00:02:13.164227  106017 system_pods.go:61] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:02:13.164230  106017 system_pods.go:61] "kube-proxy-klpqs" [42725ff5-dd5d-455f-a29a-9ce6c4b8f810] Running
	I1212 00:02:13.164233  106017 system_pods.go:61] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:02:13.164236  106017 system_pods.go:61] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:02:13.164240  106017 system_pods.go:61] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:02:13.164243  106017 system_pods.go:61] "kube-scheduler-ha-565823-m03" [467b67ab-33b8-4e90-b3d7-73f233c0a9e2] Running
	I1212 00:02:13.164246  106017 system_pods.go:61] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:02:13.164249  106017 system_pods.go:61] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:02:13.164251  106017 system_pods.go:61] "kube-vip-ha-565823-m03" [768639dc-dd70-4124-99c0-4e4d9b9bb9b5] Running
	I1212 00:02:13.164254  106017 system_pods.go:61] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:02:13.164259  106017 system_pods.go:74] duration metric: took 185.327636ms to wait for pod list to return data ...
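	The 24-pod inventory above comes from a single List call against the kube-system namespace, after which each pod's phase is reported. A compact equivalent (again assuming a clientset built as in the first sketch):

	// Sketch: list kube-system pods and report their phase, matching the
	// "24 kube-system pods found ... Running" lines above.
	package syspods

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
		return nil
	}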
	I1212 00:02:13.164271  106017 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:02:13.351702  106017 request.go:632] Waited for 187.33366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:02:13.351785  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:02:13.351793  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.351804  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.351814  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.355589  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:13.355716  106017 default_sa.go:45] found service account: "default"
	I1212 00:02:13.355732  106017 default_sa.go:55] duration metric: took 191.453257ms for default service account to be created ...
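	The next gate is simply that a ServiceAccount named "default" exists in the default namespace (created by the controller manager shortly after startup). Roughly, under the same clientset assumption:

	// Sketch: confirm the "default" service account exists, as the wait above does.
	package defaultsa

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func defaultServiceAccountExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
		sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, sa := range sas.Items {
			if sa.Name == "default" {
				return true, nil
			}
		}
		return false, nil
	}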
	I1212 00:02:13.355741  106017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:02:13.552179  106017 request.go:632] Waited for 196.355674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.552246  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.552253  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.552265  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.552274  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.558546  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:02:13.567311  106017 system_pods.go:86] 24 kube-system pods found
	I1212 00:02:13.567335  106017 system_pods.go:89] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:02:13.567341  106017 system_pods.go:89] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:02:13.567345  106017 system_pods.go:89] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:02:13.567349  106017 system_pods.go:89] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:02:13.567352  106017 system_pods.go:89] "etcd-ha-565823-m03" [506e75d1-9e81-4c24-bf45-26f7fde169fa] Running
	I1212 00:02:13.567355  106017 system_pods.go:89] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:02:13.567359  106017 system_pods.go:89] "kindnet-jffrr" [d455764c-714e-4a39-9d11-1fc4ab3ae0c9] Running
	I1212 00:02:13.567362  106017 system_pods.go:89] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:02:13.567366  106017 system_pods.go:89] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:02:13.567369  106017 system_pods.go:89] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:02:13.567373  106017 system_pods.go:89] "kube-apiserver-ha-565823-m03" [636f5858-1c42-480d-9810-abf8aa16aa69] Running
	I1212 00:02:13.567377  106017 system_pods.go:89] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:02:13.567380  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:02:13.567384  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m03" [47632e43-a401-4553-9bba-e8296023a6a2] Running
	I1212 00:02:13.567387  106017 system_pods.go:89] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:02:13.567390  106017 system_pods.go:89] "kube-proxy-klpqs" [42725ff5-dd5d-455f-a29a-9ce6c4b8f810] Running
	I1212 00:02:13.567393  106017 system_pods.go:89] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:02:13.567396  106017 system_pods.go:89] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:02:13.567400  106017 system_pods.go:89] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:02:13.567404  106017 system_pods.go:89] "kube-scheduler-ha-565823-m03" [467b67ab-33b8-4e90-b3d7-73f233c0a9e2] Running
	I1212 00:02:13.567406  106017 system_pods.go:89] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:02:13.567411  106017 system_pods.go:89] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:02:13.567416  106017 system_pods.go:89] "kube-vip-ha-565823-m03" [768639dc-dd70-4124-99c0-4e4d9b9bb9b5] Running
	I1212 00:02:13.567419  106017 system_pods.go:89] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:02:13.567425  106017 system_pods.go:126] duration metric: took 211.677185ms to wait for k8s-apps to be running ...
	I1212 00:02:13.567435  106017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:02:13.567479  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:02:13.584100  106017 system_svc.go:56] duration metric: took 16.645631ms WaitForService to wait for kubelet
	I1212 00:02:13.584137  106017 kubeadm.go:582] duration metric: took 25.664696546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
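	The kubelet probe above is just systemctl's is-active query run over SSH inside the VM; a zero exit status means the unit is active, no output is needed. Outside minikube's ssh_runner, the same idea is a one-liner with os/exec (unit name "kubelet" is the usual systemd unit and is an assumption here):

	// Sketch: check whether the kubelet systemd unit is active, analogous to the
	// "sudo systemctl is-active --quiet service kubelet" command in the log.
	package kubeletcheck

	import "os/exec"

	func kubeletActive() bool {
		// --quiet suppresses output; the exit code alone carries the answer.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		return err == nil
	}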
	I1212 00:02:13.584164  106017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:02:13.751620  106017 request.go:632] Waited for 167.335283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1212 00:02:13.751682  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1212 00:02:13.751687  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.751694  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.751707  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.755649  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:13.756501  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756522  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756532  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756535  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756538  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756541  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756545  106017 node_conditions.go:105] duration metric: took 172.375714ms to run NodePressure ...
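	The NodePressure verification lists all nodes once and reads each node's reported capacity, which is where the "17734596Ki" ephemeral-storage and "cpu capacity is 2" figures for the three nodes come from. A sketch of the same read, assuming a clientset as in the earlier examples:

	// Sketch: print each node's ephemeral-storage and CPU capacity,
	// matching the node_conditions lines above.
	package nodecap

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}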
	I1212 00:02:13.756565  106017 start.go:241] waiting for startup goroutines ...
	I1212 00:02:13.756588  106017 start.go:255] writing updated cluster config ...
	I1212 00:02:13.756868  106017 ssh_runner.go:195] Run: rm -f paused
	I1212 00:02:13.808453  106017 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 00:02:13.810275  106017 out.go:177] * Done! kubectl is now configured to use "ha-565823" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.213616917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961963213582806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7fa7b9d-abf7-446f-9718-4481d1f44170 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.214272611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eafe64d7-d768-460b-af18-f0161a3ac3fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.214353479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eafe64d7-d768-460b-af18-f0161a3ac3fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.214579455Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eafe64d7-d768-460b-af18-f0161a3ac3fe name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.266232894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a91b3ae-0418-429d-ae5c-eb8cc58d8584 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.266335229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a91b3ae-0418-429d-ae5c-eb8cc58d8584 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.268015219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65f9614a-e785-4ef8-9366-c9e9231838da name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.268739146Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961963268703197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65f9614a-e785-4ef8-9366-c9e9231838da name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.269761260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93e62289-b26f-41df-90b6-3a4eb3276e9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.269817967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93e62289-b26f-41df-90b6-3a4eb3276e9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.270135702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93e62289-b26f-41df-90b6-3a4eb3276e9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.311761362Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b554c3a-d25d-49bd-97c5-abc5c6635d02 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.311830092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b554c3a-d25d-49bd-97c5-abc5c6635d02 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.312859376Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db687617-6468-42a3-9e48-62a5e1aad2c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.313781832Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961963313757591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db687617-6468-42a3-9e48-62a5e1aad2c7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.314351817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0adc9eaf-ddcc-423b-8d12-041df7deea6c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.314400997Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0adc9eaf-ddcc-423b-8d12-041df7deea6c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.314690610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0adc9eaf-ddcc-423b-8d12-041df7deea6c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.352433061Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5876bc4e-08f9-4392-afac-4a93aedd9099 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.352521098Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5876bc4e-08f9-4392-afac-4a93aedd9099 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.354158017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79ed0e23-4b87-45ef-aea6-7c6545d4c604 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.354653334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961963354628865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79ed0e23-4b87-45ef-aea6-7c6545d4c604 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.355384170Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f258148-95d0-49c9-8dac-ad27dfdc30cd name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.355458833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f258148-95d0-49c9-8dac-ad27dfdc30cd name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:03 ha-565823 crio[664]: time="2024-12-12 00:06:03.356004984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f258148-95d0-49c9-8dac-ad27dfdc30cd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f0043af06cb92       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0d77818a442ce       busybox-7dff88458-x4p94
	999ac64245591       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   ab4dd7022ef59       coredns-7c65d6cfc9-mqzbv
	0beb663c1a28f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   2787b4f317bfa       coredns-7c65d6cfc9-4q46c
	ba4c8c97ea090       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   4161eb9de6ddb       storage-provisioner
	bfdacc6be0aee       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   332b05e74370f       kindnet-hz9rk
	514637eeaa812       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   920e405616cde       kube-proxy-hr5qc
	768be9c254101       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   87c6df22f8976       kube-vip-ha-565823
	452c6d19b2de9       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   0ab557e831fb3       kube-controller-manager-ha-565823
	743ae8ccc81f5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e6c331c3b3439       etcd-ha-565823
	4f25ff314c2e8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   d851e6de61a68       kube-apiserver-ha-565823
	b28e7b492cfe7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a6c5b082d1924       kube-scheduler-ha-565823
	
	
	==> coredns [0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3] <==
	[INFO] 10.244.1.2:40894 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004450385s
	[INFO] 10.244.1.2:47929 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225565s
	[INFO] 10.244.1.2:51252 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126773s
	[INFO] 10.244.1.2:47545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126535s
	[INFO] 10.244.1.2:37654 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119814s
	[INFO] 10.244.2.2:44808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015021s
	[INFO] 10.244.2.2:48775 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815223s
	[INFO] 10.244.2.2:56148 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132782s
	[INFO] 10.244.2.2:57998 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133493s
	[INFO] 10.244.0.4:39053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087907s
	[INFO] 10.244.0.4:34059 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001091775s
	[INFO] 10.244.1.2:56415 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000835348s
	[INFO] 10.244.1.2:46751 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114147s
	[INFO] 10.244.1.2:35096 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100606s
	[INFO] 10.244.2.2:40358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136169s
	[INFO] 10.244.2.2:56318 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204673s
	[INFO] 10.244.0.4:34528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012651s
	[INFO] 10.244.1.2:56678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145563s
	[INFO] 10.244.1.2:43671 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000363816s
	[INFO] 10.244.1.2:48047 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136942s
	[INFO] 10.244.1.2:35425 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019653s
	[INFO] 10.244.2.2:59862 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112519s
	[INFO] 10.244.0.4:33935 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108695s
	[INFO] 10.244.0.4:51044 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115709s
	[INFO] 10.244.0.4:40489 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092799s
	
	
	==> coredns [999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481] <==
	[INFO] 10.244.0.4:33301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137834s
	[INFO] 10.244.0.4:55709 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001541208s
	[INFO] 10.244.0.4:59133 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001387137s
	[INFO] 10.244.1.2:35268 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004904013s
	[INFO] 10.244.1.2:45390 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166839s
	[INFO] 10.244.2.2:51385 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000248421s
	[INFO] 10.244.2.2:33701 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001310625s
	[INFO] 10.244.2.2:48335 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124081s
	[INFO] 10.244.2.2:58439 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000278252s
	[INFO] 10.244.0.4:51825 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131036s
	[INFO] 10.244.0.4:54179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001798071s
	[INFO] 10.244.0.4:38851 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094604s
	[INFO] 10.244.0.4:48660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050194s
	[INFO] 10.244.0.4:57598 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082654s
	[INFO] 10.244.0.4:43576 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100662s
	[INFO] 10.244.1.2:60988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015105s
	[INFO] 10.244.2.2:60481 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130341s
	[INFO] 10.244.2.2:48427 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079579s
	[INFO] 10.244.0.4:39760 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227961s
	[INFO] 10.244.0.4:48093 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090061s
	[INFO] 10.244.0.4:37075 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076033s
	[INFO] 10.244.2.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000258305s
	[INFO] 10.244.2.2:40866 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177114s
	[INFO] 10.244.2.2:58880 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137362s
	[INFO] 10.244.0.4:60821 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179152s
	
	
	==> describe nodes <==
	Name:               ha-565823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T23_59_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:59:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:06:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-565823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 344476ebea784ce5952c6b9d7486bfc2
	  System UUID:                344476eb-ea78-4ce5-952c-6b9d7486bfc2
	  Boot ID:                    cf8379f5-6946-439d-a3d4-fa7d39c2dea7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x4p94              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 coredns-7c65d6cfc9-4q46c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 coredns-7c65d6cfc9-mqzbv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m23s
	  kube-system                 etcd-ha-565823                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-hz9rk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m24s
	  kube-system                 kube-apiserver-ha-565823             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-ha-565823    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-hr5qc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-scheduler-ha-565823             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-565823                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m21s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m28s  kubelet          Node ha-565823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s  kubelet          Node ha-565823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s  kubelet          Node ha-565823 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m24s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	  Normal  NodeReady                6m6s   kubelet          Node ha-565823 status is now: NodeReady
	  Normal  RegisteredNode           5m26s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	  Normal  RegisteredNode           4m10s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	
	
	Name:               ha-565823-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_00_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:00:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:03:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-565823-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9273c598fccb4678bf93616ea428fab5
	  System UUID:                9273c598-fccb-4678-bf93-616ea428fab5
	  Boot ID:                    73eb7add-f6da-422d-ad45-9773172878c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nsw2n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-565823-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m32s
	  kube-system                 kindnet-kr5js                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-565823-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-controller-manager-ha-565823-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-p2lsd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-565823-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-vip-ha-565823-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-565823-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-565823-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-565823-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m29s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-565823-m02 status is now: NodeNotReady
	
	
	Name:               ha-565823-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_01_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:01:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:06:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:02:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-565823-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7cdc3cdb36e495abaa3ddda542ce8f6
	  System UUID:                a7cdc3cd-b36e-495a-baa3-ddda542ce8f6
	  Boot ID:                    e8069ced-7862-4741-8f56-298b003d0b4d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s8nmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 etcd-ha-565823-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kindnet-jffrr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-565823-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-565823-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-proxy-klpqs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-ha-565823-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-565823-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-565823-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-565823-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-565823-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	  Normal  RegisteredNode           4m14s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	
	
	Name:               ha-565823-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_02_54_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:02:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:05:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:03:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ha-565823-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9da6268e700e4cc18f576f10f66d598f
	  System UUID:                9da6268e-700e-4cc1-8f57-6f10f66d598f
	  Boot ID:                    20440ea1-d260-49fc-a678-9a23de1ac4f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6qk4d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m9s
	  kube-system                 kube-proxy-j59sb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m3s                 kube-proxy       
	  Normal  RegisteredNode           3m9s                 node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m9s (x2 over 3m9s)  kubelet          Node ha-565823-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s (x2 over 3m9s)  kubelet          Node ha-565823-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s (x2 over 3m9s)  kubelet          Node ha-565823-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-565823-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec11 23:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053078] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041942] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920910] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec11 23:59] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.625477] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.503596] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.061991] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056761] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.187047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.124910] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.280035] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.149659] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +4.048783] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.069316] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.737553] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.583447] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +5.823487] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.790300] kauditd_printk_skb: 34 callbacks suppressed
	[Dec12 00:00] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b] <==
	{"level":"warn","ts":"2024-12-12T00:06:03.556427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.648579Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.655707Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.659218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.664477Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.676182Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.681920Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.687577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.693102Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.696712Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.699868Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.704919Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.711147Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.718246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.721357Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.725020Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.732237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.738245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.747913Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.750961Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.753711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.756427Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.757132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.765100Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:03.771658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:06:03 up 7 min,  0 users,  load average: 0.06, 0.17, 0.09
	Linux ha-565823 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098] <==
	I1212 00:05:27.120565       1 main.go:301] handling current node
	I1212 00:05:37.119553       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:37.119646       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:37.119990       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:37.120019       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	I1212 00:05:37.120347       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:37.120377       1 main.go:301] handling current node
	I1212 00:05:37.120407       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:37.120430       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:47.119691       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:47.119737       1 main.go:301] handling current node
	I1212 00:05:47.119753       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:47.119758       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:47.119987       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:47.119994       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:47.120217       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:47.120242       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	I1212 00:05:57.128438       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:57.128810       1 main.go:301] handling current node
	I1212 00:05:57.128927       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:57.128989       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:57.129767       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:57.129834       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:57.130023       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:57.130046       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95] <==
	I1211 23:59:33.823962       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1211 23:59:33.879965       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1211 23:59:33.896294       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19]
	I1211 23:59:33.897349       1 controller.go:615] quota admission added evaluator for: endpoints
	I1211 23:59:33.902931       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1211 23:59:34.842734       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1211 23:59:35.374409       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1211 23:59:35.395837       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1211 23:59:35.560177       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1211 23:59:39.944410       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1211 23:59:40.344123       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1212 00:02:22.272920       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55802: use of closed network connection
	E1212 00:02:22.464756       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55828: use of closed network connection
	E1212 00:02:22.651355       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55850: use of closed network connection
	E1212 00:02:23.038043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55874: use of closed network connection
	E1212 00:02:23.226745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55900: use of closed network connection
	E1212 00:02:23.410000       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55904: use of closed network connection
	E1212 00:02:23.591256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55924: use of closed network connection
	E1212 00:02:23.770667       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55932: use of closed network connection
	E1212 00:02:24.076679       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55962: use of closed network connection
	E1212 00:02:24.252739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55982: use of closed network connection
	E1212 00:02:24.461578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56012: use of closed network connection
	E1212 00:02:24.646238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56034: use of closed network connection
	E1212 00:02:24.817848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56044: use of closed network connection
	E1212 00:02:24.999617       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56060: use of closed network connection
	
	
	==> kube-controller-manager [452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1] <==
	I1212 00:02:54.484626       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565823-m04" podCIDRs=["10.244.3.0/24"]
	I1212 00:02:54.484689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.484721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.500323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.636444       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565823-m04"
	I1212 00:02:54.652045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.687694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:55.082775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:57.485970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:57.555718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:58.675906       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:58.734910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:04.836593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:16.466024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:16.466304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565823-m04"
	I1212 00:03:16.485293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:17.501671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:25.341676       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:04:14.668472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:14.669356       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565823-m04"
	I1212 00:04:14.705380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:14.785686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.151428ms"
	I1212 00:04:14.785837       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="78.406µs"
	I1212 00:04:18.764949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:19.939887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	
	
	==> kube-proxy [514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1211 23:59:41.687183       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1211 23:59:41.713699       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	E1211 23:59:41.713883       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:59:41.760766       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1211 23:59:41.760924       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:59:41.761009       1 server_linux.go:169] "Using iptables Proxier"
	I1211 23:59:41.764268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:59:41.765555       1 server.go:483] "Version info" version="v1.31.2"
	I1211 23:59:41.765710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:59:41.768630       1 config.go:105] "Starting endpoint slice config controller"
	I1211 23:59:41.769016       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1211 23:59:41.769876       1 config.go:199] "Starting service config controller"
	I1211 23:59:41.769889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1211 23:59:41.771229       1 config.go:328] "Starting node config controller"
	I1211 23:59:41.771259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1211 23:59:41.871443       1 shared_informer.go:320] Caches are synced for node config
	I1211 23:59:41.871633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1211 23:59:41.871849       1 shared_informer.go:320] Caches are synced for service config
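
Both the nftables cleanup failures above and the kubelet canary errors further down fail for the same apparent reason: the minikube guest kernel does not expose an IPv6 nat table, so kube-proxy falls back to a single-stack IPv4 iptables proxier. A small Go sketch, assuming the ip6tables binary is on PATH, of checking that condition the same way (by asking for the IPv6 nat table):

package main

import (
	"fmt"
	"os/exec"
)

// hasIP6NAT reports whether the kernel exposes an IPv6 nat table; the
// "Table does not exist" failure seen in the logs makes this return an error.
func hasIP6NAT() (bool, error) {
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("ip6tables nat probe failed: %v: %s", err, out)
	}
	return true, nil
}

func main() {
	ok, err := hasIP6NAT()
	if err != nil {
		fmt.Println("IPv6 NAT unavailable:", err)
		return
	}
	fmt.Println("IPv6 NAT available:", ok)
}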
	
	
	==> kube-scheduler [b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4] <==
	E1211 23:59:33.413263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1211 23:59:35.297693       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:02:14.658309       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="bc1a3365-d32e-42cc-b58c-95a59e72d54b" pod="default/busybox-7dff88458-nsw2n" assumedNode="ha-565823-m02" currentNode="ha-565823-m03"
	E1212 00:02:14.675240       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nsw2n\": pod busybox-7dff88458-nsw2n is already assigned to node \"ha-565823-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nsw2n" node="ha-565823-m03"
	E1212 00:02:14.679553       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bc1a3365-d32e-42cc-b58c-95a59e72d54b(default/busybox-7dff88458-nsw2n) was assumed on ha-565823-m03 but assigned to ha-565823-m02" pod="default/busybox-7dff88458-nsw2n"
	E1212 00:02:14.680513       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nsw2n\": pod busybox-7dff88458-nsw2n is already assigned to node \"ha-565823-m02\"" pod="default/busybox-7dff88458-nsw2n"
	I1212 00:02:14.680708       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nsw2n" node="ha-565823-m02"
	E1212 00:02:14.899144       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-vn6xg is already present in the active queue" pod="default/busybox-7dff88458-vn6xg"
	E1212 00:02:14.936687       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-vn6xg\" not found" pod="default/busybox-7dff88458-vn6xg"
	E1212 00:02:54.574668       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j59sb\": pod kube-proxy-j59sb is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j59sb" node="ha-565823-m04"
	E1212 00:02:54.578200       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6qk4d\": pod kindnet-6qk4d is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6qk4d" node="ha-565823-m04"
	E1212 00:02:54.581395       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b52adb65-9292-42b8-bca8-b4a44c756e15(kube-system/kube-proxy-j59sb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j59sb"
	E1212 00:02:54.582857       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j59sb\": pod kube-proxy-j59sb is already assigned to node \"ha-565823-m04\"" pod="kube-system/kube-proxy-j59sb"
	I1212 00:02:54.582977       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j59sb" node="ha-565823-m04"
	E1212 00:02:54.583674       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8ba90dda-f093-4ba3-abad-427394ebe334(kube-system/kindnet-6qk4d) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6qk4d"
	E1212 00:02:54.583943       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6qk4d\": pod kindnet-6qk4d is already assigned to node \"ha-565823-m04\"" pod="kube-system/kindnet-6qk4d"
	I1212 00:02:54.584002       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6qk4d" node="ha-565823-m04"
	E1212 00:02:54.639291       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lbbhs\": pod kube-proxy-lbbhs is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lbbhs" node="ha-565823-m04"
	E1212 00:02:54.640439       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2061489e-9108-4e76-af40-2fcc1540357b(kube-system/kube-proxy-lbbhs) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lbbhs"
	E1212 00:02:54.640623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lbbhs\": pod kube-proxy-lbbhs is already assigned to node \"ha-565823-m04\"" pod="kube-system/kube-proxy-lbbhs"
	I1212 00:02:54.640743       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lbbhs" node="ha-565823-m04"
	E1212 00:02:54.639802       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pfdgd\": pod kindnet-pfdgd is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pfdgd" node="ha-565823-m04"
	E1212 00:02:54.641599       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5bd86f21-f17e-4d19-8bac-53393aecda0b(kube-system/kindnet-pfdgd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pfdgd"
	E1212 00:02:54.641728       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pfdgd\": pod kindnet-pfdgd is already assigned to node \"ha-565823-m04\"" pod="kube-system/kindnet-pfdgd"
	I1212 00:02:54.641865       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pfdgd" node="ha-565823-m04"
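
The "Plugin Failed ... already assigned" errors above typically mean another scheduler replica (or a retried bind) had already bound the pod, so this Bind call is rejected and the existing assignment stands; the pods are not actually stuck. A short client-go sketch for confirming where such a pod ended up; the kubeconfig path is taken from this run's environment and the pod name from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig written for this test run (see KUBECONFIG in the start log).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// spec.nodeName records the binding that won the race.
	pod, err := clientset.CoreV1().Pods("default").Get(context.Background(),
		"busybox-7dff88458-nsw2n", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s is bound to node %s\n", pod.Name, pod.Spec.NodeName)
}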
	
	
	==> kubelet <==
	Dec 12 00:04:35 ha-565823 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 00:04:35 ha-565823 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 00:04:35 ha-565823 kubelet[1304]: E1212 00:04:35.644561    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961875641522910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:35 ha-565823 kubelet[1304]: E1212 00:04:35.644914    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961875641522910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:45 ha-565823 kubelet[1304]: E1212 00:04:45.646672    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961885646360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:45 ha-565823 kubelet[1304]: E1212 00:04:45.646986    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961885646360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:55 ha-565823 kubelet[1304]: E1212 00:04:55.649177    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961895648846632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:55 ha-565823 kubelet[1304]: E1212 00:04:55.649229    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961895648846632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:05 ha-565823 kubelet[1304]: E1212 00:05:05.650905    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961905650620490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:05 ha-565823 kubelet[1304]: E1212 00:05:05.650951    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961905650620490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:15 ha-565823 kubelet[1304]: E1212 00:05:15.652272    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961915651820297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:15 ha-565823 kubelet[1304]: E1212 00:05:15.652343    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961915651820297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:25 ha-565823 kubelet[1304]: E1212 00:05:25.654671    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961925654167907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:25 ha-565823 kubelet[1304]: E1212 00:05:25.655016    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961925654167907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.529805    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 00:05:35 ha-565823 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.657687    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961935657273568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.657712    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961935657273568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:45 ha-565823 kubelet[1304]: E1212 00:05:45.659792    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961945659457766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:45 ha-565823 kubelet[1304]: E1212 00:05:45.659845    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961945659457766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:55 ha-565823 kubelet[1304]: E1212 00:05:55.661887    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961955661658114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:55 ha-565823 kubelet[1304]: E1212 00:05:55.662031    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961955661658114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
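
The recurring eviction-manager errors above mean the kubelet keeps rejecting the image-filesystem stats CRI-O returns, so eviction synchronization never completes. A hedged Go sketch of issuing the same ImageFsInfo call directly against the CRI socket to inspect the raw response; the socket path is the usual CRI-O default, and the cri-api and grpc modules are assumed to be available:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket; adjust if the runtime endpoint differs.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC the kubelet uses for image stats; in the logged responses only
	// the image filesystem is reported, with no container filesystem usage.
	resp, err := runtimeapi.NewImageServiceClient(conn).ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, fs := range resp.ImageFilesystems {
		fmt.Printf("image fs %s: used=%d bytes, inodes=%d\n",
			fs.FsId.Mountpoint, fs.UsedBytes.Value, fs.InodesUsed.Value)
	}
}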
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565823 -n ha-565823
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (6.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr: (3.846283272s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
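
ha_test.go is asserting on aggregate minikube status output here: after restarting m02 it expects three control-plane nodes and all four hosts, kubelets, and apiservers to be reported as running. A rough client-go sketch of the equivalent check straight from the API server, counting Ready nodes that carry the standard kubeadm control-plane label; the kubeconfig path comes from this run's environment:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Control-plane nodes carry this (empty-valued) label in kubeadm clusters.
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(),
		metav1.ListOptions{LabelSelector: "node-role.kubernetes.io/control-plane"})
	if err != nil {
		panic(err)
	}

	ready := 0
	for _, n := range nodes.Items {
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				ready++
			}
		}
	}
	fmt.Printf("%d control-plane nodes, %d Ready\n", len(nodes.Items), ready)
}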
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565823 -n ha-565823
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 logs -n 25: (1.435029235s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m03_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m04 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp testdata/cp-test.txt                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m04_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03:/home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m03 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565823 node stop m02 -v=7                                                     | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565823 node start m02 -v=7                                                    | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:58:49
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:58:49.879098  106017 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:58:49.879215  106017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:49.879223  106017 out.go:358] Setting ErrFile to fd 2...
	I1211 23:58:49.879228  106017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:49.879424  106017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:58:49.880067  106017 out.go:352] Setting JSON to false
	I1211 23:58:49.880934  106017 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9672,"bootTime":1733951858,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:58:49.881036  106017 start.go:139] virtualization: kvm guest
	I1211 23:58:49.883482  106017 out.go:177] * [ha-565823] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1211 23:58:49.884859  106017 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:58:49.884853  106017 notify.go:220] Checking for updates...
	I1211 23:58:49.887649  106017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:58:49.889057  106017 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:58:49.890422  106017 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:49.891732  106017 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:58:49.893196  106017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:58:49.894834  106017 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:58:49.929647  106017 out.go:177] * Using the kvm2 driver based on user configuration
	I1211 23:58:49.931090  106017 start.go:297] selected driver: kvm2
	I1211 23:58:49.931102  106017 start.go:901] validating driver "kvm2" against <nil>
	I1211 23:58:49.931118  106017 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:58:49.931896  106017 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:58:49.931980  106017 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1211 23:58:49.946877  106017 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1211 23:58:49.946925  106017 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:58:49.947184  106017 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:58:49.947219  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:58:49.947291  106017 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1211 23:58:49.947306  106017 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:58:49.947387  106017 start.go:340] cluster config:
	{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:58:49.947534  106017 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:58:49.949244  106017 out.go:177] * Starting "ha-565823" primary control-plane node in "ha-565823" cluster
	I1211 23:58:49.950461  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:58:49.950504  106017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:58:49.950517  106017 cache.go:56] Caching tarball of preloaded images
	I1211 23:58:49.950593  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:58:49.950607  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:58:49.950924  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:58:49.950947  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json: {Name:mk87ab89a0730849be8d507f8c0453b4c014ad9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:58:49.951100  106017 start.go:360] acquireMachinesLock for ha-565823: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:58:49.951143  106017 start.go:364] duration metric: took 25.725µs to acquireMachinesLock for "ha-565823"
	I1211 23:58:49.951167  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:58:49.951248  106017 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:58:49.952920  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 23:58:49.953077  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:58:49.953130  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:58:49.967497  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I1211 23:58:49.967981  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:58:49.968550  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:58:49.968587  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:58:49.968981  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:58:49.969194  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:58:49.969410  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:58:49.969566  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1211 23:58:49.969614  106017 client.go:168] LocalClient.Create starting
	I1211 23:58:49.969660  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:58:49.969702  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:58:49.969727  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:58:49.969804  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:58:49.969833  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:58:49.969852  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:58:49.969875  106017 main.go:141] libmachine: Running pre-create checks...
	I1211 23:58:49.969887  106017 main.go:141] libmachine: (ha-565823) Calling .PreCreateCheck
	I1211 23:58:49.970228  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:58:49.970579  106017 main.go:141] libmachine: Creating machine...
	I1211 23:58:49.970592  106017 main.go:141] libmachine: (ha-565823) Calling .Create
	I1211 23:58:49.970720  106017 main.go:141] libmachine: (ha-565823) Creating KVM machine...
	I1211 23:58:49.971894  106017 main.go:141] libmachine: (ha-565823) DBG | found existing default KVM network
	I1211 23:58:49.972543  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:49.972397  106042 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I1211 23:58:49.972595  106017 main.go:141] libmachine: (ha-565823) DBG | created network xml: 
	I1211 23:58:49.972612  106017 main.go:141] libmachine: (ha-565823) DBG | <network>
	I1211 23:58:49.972619  106017 main.go:141] libmachine: (ha-565823) DBG |   <name>mk-ha-565823</name>
	I1211 23:58:49.972628  106017 main.go:141] libmachine: (ha-565823) DBG |   <dns enable='no'/>
	I1211 23:58:49.972641  106017 main.go:141] libmachine: (ha-565823) DBG |   
	I1211 23:58:49.972653  106017 main.go:141] libmachine: (ha-565823) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1211 23:58:49.972659  106017 main.go:141] libmachine: (ha-565823) DBG |     <dhcp>
	I1211 23:58:49.972666  106017 main.go:141] libmachine: (ha-565823) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1211 23:58:49.972678  106017 main.go:141] libmachine: (ha-565823) DBG |     </dhcp>
	I1211 23:58:49.972689  106017 main.go:141] libmachine: (ha-565823) DBG |   </ip>
	I1211 23:58:49.972696  106017 main.go:141] libmachine: (ha-565823) DBG |   
	I1211 23:58:49.972705  106017 main.go:141] libmachine: (ha-565823) DBG | </network>
	I1211 23:58:49.972742  106017 main.go:141] libmachine: (ha-565823) DBG | 
	I1211 23:58:49.977592  106017 main.go:141] libmachine: (ha-565823) DBG | trying to create private KVM network mk-ha-565823 192.168.39.0/24...
	I1211 23:58:50.045920  106017 main.go:141] libmachine: (ha-565823) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 ...
	I1211 23:58:50.045945  106017 main.go:141] libmachine: (ha-565823) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:58:50.045957  106017 main.go:141] libmachine: (ha-565823) DBG | private KVM network mk-ha-565823 192.168.39.0/24 created
	I1211 23:58:50.045974  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.045851  106042 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:50.046037  106017 main.go:141] libmachine: (ha-565823) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:58:50.332532  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.332355  106042 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa...
	I1211 23:58:50.607374  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.607211  106042 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/ha-565823.rawdisk...
	I1211 23:58:50.607405  106017 main.go:141] libmachine: (ha-565823) DBG | Writing magic tar header
	I1211 23:58:50.607418  106017 main.go:141] libmachine: (ha-565823) DBG | Writing SSH key tar header
	I1211 23:58:50.607425  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.607336  106042 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 ...
	I1211 23:58:50.607436  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823
	I1211 23:58:50.607514  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:58:50.607560  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 (perms=drwx------)
	I1211 23:58:50.607571  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:50.607581  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:58:50.607606  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:58:50.607624  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:58:50.607642  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:58:50.607654  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home
	I1211 23:58:50.607666  106017 main.go:141] libmachine: (ha-565823) DBG | Skipping /home - not owner
	I1211 23:58:50.607678  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:58:50.607687  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:58:50.607693  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:58:50.607704  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:58:50.607717  106017 main.go:141] libmachine: (ha-565823) Creating domain...
	I1211 23:58:50.608802  106017 main.go:141] libmachine: (ha-565823) define libvirt domain using xml: 
	I1211 23:58:50.608821  106017 main.go:141] libmachine: (ha-565823) <domain type='kvm'>
	I1211 23:58:50.608828  106017 main.go:141] libmachine: (ha-565823)   <name>ha-565823</name>
	I1211 23:58:50.608832  106017 main.go:141] libmachine: (ha-565823)   <memory unit='MiB'>2200</memory>
	I1211 23:58:50.608838  106017 main.go:141] libmachine: (ha-565823)   <vcpu>2</vcpu>
	I1211 23:58:50.608842  106017 main.go:141] libmachine: (ha-565823)   <features>
	I1211 23:58:50.608846  106017 main.go:141] libmachine: (ha-565823)     <acpi/>
	I1211 23:58:50.608850  106017 main.go:141] libmachine: (ha-565823)     <apic/>
	I1211 23:58:50.608857  106017 main.go:141] libmachine: (ha-565823)     <pae/>
	I1211 23:58:50.608868  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.608875  106017 main.go:141] libmachine: (ha-565823)   </features>
	I1211 23:58:50.608879  106017 main.go:141] libmachine: (ha-565823)   <cpu mode='host-passthrough'>
	I1211 23:58:50.608887  106017 main.go:141] libmachine: (ha-565823)   
	I1211 23:58:50.608891  106017 main.go:141] libmachine: (ha-565823)   </cpu>
	I1211 23:58:50.608898  106017 main.go:141] libmachine: (ha-565823)   <os>
	I1211 23:58:50.608902  106017 main.go:141] libmachine: (ha-565823)     <type>hvm</type>
	I1211 23:58:50.608977  106017 main.go:141] libmachine: (ha-565823)     <boot dev='cdrom'/>
	I1211 23:58:50.609011  106017 main.go:141] libmachine: (ha-565823)     <boot dev='hd'/>
	I1211 23:58:50.609024  106017 main.go:141] libmachine: (ha-565823)     <bootmenu enable='no'/>
	I1211 23:58:50.609036  106017 main.go:141] libmachine: (ha-565823)   </os>
	I1211 23:58:50.609043  106017 main.go:141] libmachine: (ha-565823)   <devices>
	I1211 23:58:50.609052  106017 main.go:141] libmachine: (ha-565823)     <disk type='file' device='cdrom'>
	I1211 23:58:50.609063  106017 main.go:141] libmachine: (ha-565823)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/boot2docker.iso'/>
	I1211 23:58:50.609074  106017 main.go:141] libmachine: (ha-565823)       <target dev='hdc' bus='scsi'/>
	I1211 23:58:50.609085  106017 main.go:141] libmachine: (ha-565823)       <readonly/>
	I1211 23:58:50.609094  106017 main.go:141] libmachine: (ha-565823)     </disk>
	I1211 23:58:50.609105  106017 main.go:141] libmachine: (ha-565823)     <disk type='file' device='disk'>
	I1211 23:58:50.609117  106017 main.go:141] libmachine: (ha-565823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:58:50.609133  106017 main.go:141] libmachine: (ha-565823)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/ha-565823.rawdisk'/>
	I1211 23:58:50.609144  106017 main.go:141] libmachine: (ha-565823)       <target dev='hda' bus='virtio'/>
	I1211 23:58:50.609154  106017 main.go:141] libmachine: (ha-565823)     </disk>
	I1211 23:58:50.609164  106017 main.go:141] libmachine: (ha-565823)     <interface type='network'>
	I1211 23:58:50.609176  106017 main.go:141] libmachine: (ha-565823)       <source network='mk-ha-565823'/>
	I1211 23:58:50.609187  106017 main.go:141] libmachine: (ha-565823)       <model type='virtio'/>
	I1211 23:58:50.609198  106017 main.go:141] libmachine: (ha-565823)     </interface>
	I1211 23:58:50.609209  106017 main.go:141] libmachine: (ha-565823)     <interface type='network'>
	I1211 23:58:50.609221  106017 main.go:141] libmachine: (ha-565823)       <source network='default'/>
	I1211 23:58:50.609230  106017 main.go:141] libmachine: (ha-565823)       <model type='virtio'/>
	I1211 23:58:50.609240  106017 main.go:141] libmachine: (ha-565823)     </interface>
	I1211 23:58:50.609249  106017 main.go:141] libmachine: (ha-565823)     <serial type='pty'>
	I1211 23:58:50.609271  106017 main.go:141] libmachine: (ha-565823)       <target port='0'/>
	I1211 23:58:50.609292  106017 main.go:141] libmachine: (ha-565823)     </serial>
	I1211 23:58:50.609319  106017 main.go:141] libmachine: (ha-565823)     <console type='pty'>
	I1211 23:58:50.609342  106017 main.go:141] libmachine: (ha-565823)       <target type='serial' port='0'/>
	I1211 23:58:50.609358  106017 main.go:141] libmachine: (ha-565823)     </console>
	I1211 23:58:50.609368  106017 main.go:141] libmachine: (ha-565823)     <rng model='virtio'>
	I1211 23:58:50.609380  106017 main.go:141] libmachine: (ha-565823)       <backend model='random'>/dev/random</backend>
	I1211 23:58:50.609388  106017 main.go:141] libmachine: (ha-565823)     </rng>
	I1211 23:58:50.609393  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.609399  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.609404  106017 main.go:141] libmachine: (ha-565823)   </devices>
	I1211 23:58:50.609412  106017 main.go:141] libmachine: (ha-565823) </domain>
	I1211 23:58:50.609425  106017 main.go:141] libmachine: (ha-565823) 
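The domain XML logged above is handed to libvirt to define and start the VM. As a rough, minimal sketch of that step using the libvirt Go bindings (not the kvm2 driver's actual code; the file name ha-565823.xml and the qemu:///system URI are taken from the log purely for illustration):

    package main

    import (
        "fmt"
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the system libvirt daemon, matching KVMQemuURI:qemu:///system above.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // ha-565823.xml would hold the <domain type='kvm'> ... </domain> document from the log.
        xml, err := os.ReadFile("ha-565823.xml")
        if err != nil {
            log.Fatalf("read domain xml: %v", err)
        }

        // Define the persistent domain, then boot it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatalf("define domain: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatalf("start domain: %v", err)
        }
        fmt.Println("domain ha-565823 defined and started")
    }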
	I1211 23:58:50.614253  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:5a:5d:6a in network default
	I1211 23:58:50.614867  106017 main.go:141] libmachine: (ha-565823) Ensuring networks are active...
	I1211 23:58:50.614888  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:50.615542  106017 main.go:141] libmachine: (ha-565823) Ensuring network default is active
	I1211 23:58:50.615828  106017 main.go:141] libmachine: (ha-565823) Ensuring network mk-ha-565823 is active
	I1211 23:58:50.616242  106017 main.go:141] libmachine: (ha-565823) Getting domain xml...
	I1211 23:58:50.616898  106017 main.go:141] libmachine: (ha-565823) Creating domain...
	I1211 23:58:51.817451  106017 main.go:141] libmachine: (ha-565823) Waiting to get IP...
	I1211 23:58:51.818184  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:51.818533  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:51.818576  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:51.818514  106042 retry.go:31] will retry after 280.301496ms: waiting for machine to come up
	I1211 23:58:52.100046  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.100502  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.100533  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.100451  106042 retry.go:31] will retry after 276.944736ms: waiting for machine to come up
	I1211 23:58:52.378928  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.379349  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.379382  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.379295  106042 retry.go:31] will retry after 389.022589ms: waiting for machine to come up
	I1211 23:58:52.769835  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.770314  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.770357  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.770269  106042 retry.go:31] will retry after 542.492277ms: waiting for machine to come up
	I1211 23:58:53.313855  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:53.314281  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:53.314305  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:53.314231  106042 retry.go:31] will retry after 742.209465ms: waiting for machine to come up
	I1211 23:58:54.058032  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:54.058453  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:54.058490  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:54.058433  106042 retry.go:31] will retry after 754.421967ms: waiting for machine to come up
	I1211 23:58:54.814555  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:54.814980  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:54.815017  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:54.814915  106042 retry.go:31] will retry after 802.576471ms: waiting for machine to come up
	I1211 23:58:55.619852  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:55.620325  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:55.620362  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:55.620271  106042 retry.go:31] will retry after 1.192308346s: waiting for machine to come up
	I1211 23:58:56.815553  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:56.816025  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:56.816050  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:56.815966  106042 retry.go:31] will retry after 1.618860426s: waiting for machine to come up
	I1211 23:58:58.436766  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:58.437231  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:58.437256  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:58.437186  106042 retry.go:31] will retry after 2.219805666s: waiting for machine to come up
	I1211 23:59:00.658607  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:00.659028  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:00.659058  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:00.658968  106042 retry.go:31] will retry after 1.768582626s: waiting for machine to come up
	I1211 23:59:02.429943  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:02.430433  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:02.430464  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:02.430369  106042 retry.go:31] will retry after 2.185532844s: waiting for machine to come up
	I1211 23:59:04.617032  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:04.617473  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:04.617499  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:04.617419  106042 retry.go:31] will retry after 4.346976865s: waiting for machine to come up
	I1211 23:59:08.969389  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:08.969741  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:08.969760  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:08.969711  106042 retry.go:31] will retry after 4.969601196s: waiting for machine to come up
	I1211 23:59:13.943658  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:13.944048  106017 main.go:141] libmachine: (ha-565823) Found IP for machine: 192.168.39.19
	I1211 23:59:13.944063  106017 main.go:141] libmachine: (ha-565823) Reserving static IP address...
	I1211 23:59:13.944071  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has current primary IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:13.944392  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find host DHCP lease matching {name: "ha-565823", mac: "52:54:00:2b:2e:da", ip: "192.168.39.19"} in network mk-ha-565823
	I1211 23:59:14.015315  106017 main.go:141] libmachine: (ha-565823) DBG | Getting to WaitForSSH function...
	I1211 23:59:14.015347  106017 main.go:141] libmachine: (ha-565823) Reserved static IP address: 192.168.39.19
	I1211 23:59:14.015425  106017 main.go:141] libmachine: (ha-565823) Waiting for SSH to be available...
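The repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff-and-retry loop around the DHCP lease lookup. A minimal standard-library sketch of that pattern (lookupIP is an illustrative stand-in, not minikube's function):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds or attempts are exhausted, sleeping with
    // jittered, roughly doubling delays in between, similar to the waits in the log.
    func retry(attempts int, base time.Duration, fn func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            // Add up to 50% jitter so concurrent waiters do not retry in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return errors.New("machine did not come up in time")
    }

    func main() {
        start := time.Now()
        // lookupIP is a stand-in for querying the DHCP leases of network mk-ha-565823.
        lookupIP := func() error {
            if time.Since(start) < 2*time.Second {
                return errors.New("no lease yet")
            }
            return nil
        }
        if err := retry(10, 250*time.Millisecond, lookupIP); err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("machine is up")
    }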
	I1211 23:59:14.017689  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:14.018021  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823
	I1211 23:59:14.018050  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find defined IP address of network mk-ha-565823 interface with MAC address 52:54:00:2b:2e:da
	I1211 23:59:14.018183  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH client type: external
	I1211 23:59:14.018223  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa (-rw-------)
	I1211 23:59:14.018268  106017 main.go:141] libmachine: (ha-565823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:59:14.018288  106017 main.go:141] libmachine: (ha-565823) DBG | About to run SSH command:
	I1211 23:59:14.018327  106017 main.go:141] libmachine: (ha-565823) DBG | exit 0
	I1211 23:59:14.021958  106017 main.go:141] libmachine: (ha-565823) DBG | SSH cmd err, output: exit status 255: 
	I1211 23:59:14.021983  106017 main.go:141] libmachine: (ha-565823) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1211 23:59:14.021992  106017 main.go:141] libmachine: (ha-565823) DBG | command : exit 0
	I1211 23:59:14.022004  106017 main.go:141] libmachine: (ha-565823) DBG | err     : exit status 255
	I1211 23:59:14.022014  106017 main.go:141] libmachine: (ha-565823) DBG | output  : 
	I1211 23:59:17.023677  106017 main.go:141] libmachine: (ha-565823) DBG | Getting to WaitForSSH function...
	I1211 23:59:17.026110  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.026503  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.026529  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.026696  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH client type: external
	I1211 23:59:17.026723  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa (-rw-------)
	I1211 23:59:17.026749  106017 main.go:141] libmachine: (ha-565823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:59:17.026776  106017 main.go:141] libmachine: (ha-565823) DBG | About to run SSH command:
	I1211 23:59:17.026792  106017 main.go:141] libmachine: (ha-565823) DBG | exit 0
	I1211 23:59:17.155941  106017 main.go:141] libmachine: (ha-565823) DBG | SSH cmd err, output: <nil>: 
	I1211 23:59:17.156245  106017 main.go:141] libmachine: (ha-565823) KVM machine creation complete!
	I1211 23:59:17.156531  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:59:17.157110  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:17.157306  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:17.157460  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1211 23:59:17.157473  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:17.158855  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1211 23:59:17.158893  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1211 23:59:17.158902  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1211 23:59:17.158918  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.161015  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.161305  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.161347  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.161435  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.161600  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.161751  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.161869  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.162043  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.162241  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.162251  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1211 23:59:17.270900  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
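WaitForSSH repeatedly runs "exit 0" over SSH until the command succeeds, as seen above. A minimal sketch of one such probe with the golang.org/x/crypto/ssh package (the address, user, and key path are copied from the log and would need adjusting in a real environment):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // probeSSH dials the VM and runs "exit 0", mirroring the WaitForSSH check in the log.
    func probeSSH(addr, user, keyPath string) error {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return fmt.Errorf("read key: %w", err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return fmt.Errorf("parse key: %w", err)
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return fmt.Errorf("dial: %w", err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return fmt.Errorf("session: %w", err)
        }
        defer session.Close()
        return session.Run("exit 0")
    }

    func main() {
        key := "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa"
        if err := probeSSH("192.168.39.19:22", "docker", key); err != nil {
            log.Fatal(err)
        }
        fmt.Println("SSH is available")
    }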
	I1211 23:59:17.270927  106017 main.go:141] libmachine: Detecting the provisioner...
	I1211 23:59:17.270938  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.273797  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.274144  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.274170  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.274323  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.274499  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.274631  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.274743  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.274871  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.275034  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.275045  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1211 23:59:17.388514  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1211 23:59:17.388598  106017 main.go:141] libmachine: found compatible host: buildroot
	I1211 23:59:17.388612  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1211 23:59:17.388622  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.388876  106017 buildroot.go:166] provisioning hostname "ha-565823"
	I1211 23:59:17.388901  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.389119  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.391763  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.392089  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.392117  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.392206  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.392374  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.392583  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.392750  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.392900  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.393085  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.393098  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823 && echo "ha-565823" | sudo tee /etc/hostname
	I1211 23:59:17.517872  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823
	
	I1211 23:59:17.517906  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.520794  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.521115  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.521139  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.521316  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.521505  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.521649  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.521748  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.521909  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.522131  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.522150  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:59:17.641444  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:59:17.641473  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1211 23:59:17.641523  106017 buildroot.go:174] setting up certificates
	I1211 23:59:17.641537  106017 provision.go:84] configureAuth start
	I1211 23:59:17.641550  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.641858  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:17.644632  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.644929  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.644969  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.645145  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.647106  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.647440  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.647460  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.647633  106017 provision.go:143] copyHostCerts
	I1211 23:59:17.647667  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1211 23:59:17.647703  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1211 23:59:17.647712  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1211 23:59:17.647777  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1211 23:59:17.647854  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1211 23:59:17.647873  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1211 23:59:17.647879  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1211 23:59:17.647903  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1211 23:59:17.647943  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1211 23:59:17.647959  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1211 23:59:17.647965  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1211 23:59:17.647985  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1211 23:59:17.648036  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823 san=[127.0.0.1 192.168.39.19 ha-565823 localhost minikube]
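The server certificate above is issued by the minikube CA with the listed SANs (127.0.0.1, 192.168.39.19, ha-565823, localhost, minikube). A rough standard-library sketch of issuing such a SAN certificate; for brevity the CA is generated in place rather than loaded from certs/ca.pem, and the validity period and subject are illustrative assumptions, not minikube's exact parameters:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key pair; in minikube this would be loaded from ca.pem and ca-key.pem instead.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour), // assumed lifetime
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server key and certificate with the SANs seen in the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-565823"}}, // org from the log
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-565823", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.19")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        // Write server.pem; the private key would be written alongside as server-key.pem.
        block := &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}
        if err := os.WriteFile("server.pem", pem.EncodeToMemory(block), 0o644); err != nil {
            log.Fatal(err)
        }
    }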
	I1211 23:59:17.803088  106017 provision.go:177] copyRemoteCerts
	I1211 23:59:17.803154  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:59:17.803180  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.806065  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.806383  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.806401  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.806621  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.806836  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.806981  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.807172  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:17.894618  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 23:59:17.894691  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 23:59:17.921956  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 23:59:17.922023  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 23:59:17.948821  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 23:59:17.948890  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1211 23:59:17.975580  106017 provision.go:87] duration metric: took 334.027463ms to configureAuth
	I1211 23:59:17.975634  106017 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:59:17.975827  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:17.975904  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.978577  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.978850  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.978901  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.979082  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.979257  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.979385  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.979493  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.979692  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.979889  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.979912  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:59:18.235267  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:59:18.235313  106017 main.go:141] libmachine: Checking connection to Docker...
	I1211 23:59:18.235325  106017 main.go:141] libmachine: (ha-565823) Calling .GetURL
	I1211 23:59:18.236752  106017 main.go:141] libmachine: (ha-565823) DBG | Using libvirt version 6000000
	I1211 23:59:18.239115  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.239502  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.239532  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.239731  106017 main.go:141] libmachine: Docker is up and running!
	I1211 23:59:18.239753  106017 main.go:141] libmachine: Reticulating splines...
	I1211 23:59:18.239771  106017 client.go:171] duration metric: took 28.270144196s to LocalClient.Create
	I1211 23:59:18.239864  106017 start.go:167] duration metric: took 28.27029823s to libmachine.API.Create "ha-565823"
	I1211 23:59:18.239885  106017 start.go:293] postStartSetup for "ha-565823" (driver="kvm2")
	I1211 23:59:18.239895  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:59:18.239917  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.240179  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:59:18.240211  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.242164  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.242466  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.242493  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.242645  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.242832  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.242993  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.243119  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.330660  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:59:18.335424  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1211 23:59:18.335447  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1211 23:59:18.335503  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1211 23:59:18.335574  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1211 23:59:18.335584  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1211 23:59:18.335717  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 23:59:18.346001  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1211 23:59:18.374524  106017 start.go:296] duration metric: took 134.623519ms for postStartSetup
	I1211 23:59:18.374583  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:59:18.375295  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:18.377900  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.378234  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.378262  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.378516  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:18.378710  106017 start.go:128] duration metric: took 28.427447509s to createHost
	I1211 23:59:18.378738  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.380862  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.381196  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.381220  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.381358  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.381537  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.381691  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.381809  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.381919  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:18.382120  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:18.382133  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:59:18.492450  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961558.472734336
	
	I1211 23:59:18.492473  106017 fix.go:216] guest clock: 1733961558.472734336
	I1211 23:59:18.492480  106017 fix.go:229] Guest: 2024-12-11 23:59:18.472734336 +0000 UTC Remote: 2024-12-11 23:59:18.378724497 +0000 UTC m=+28.540551547 (delta=94.009839ms)
	I1211 23:59:18.492521  106017 fix.go:200] guest clock delta is within tolerance: 94.009839ms
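The clock check above parses the guest's "date +%s.%N" output and accepts the machine when the guest/host delta is small. A tiny sketch of that comparison using the timestamps from the log (the one-second tolerance is an assumption for illustration, not minikube's configured value):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest clock parsed from the "date +%s.%N" output in the log.
        guest := time.Unix(1733961558, 472734336)
        // Host-side reference timestamp, also from the log.
        host := time.Date(2024, 12, 11, 23, 59, 18, 378724497, time.UTC)

        delta := guest.Sub(host)
        tolerance := time.Second // assumed tolerance, for illustration only
        if delta.Abs() <= tolerance {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; clock would be synced\n", delta)
        }
    }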
	I1211 23:59:18.492529  106017 start.go:83] releasing machines lock for "ha-565823", held for 28.541373742s
	I1211 23:59:18.492553  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.492820  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:18.495388  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.495716  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.495743  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.495888  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496371  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496534  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496615  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:59:18.496654  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.496714  106017 ssh_runner.go:195] Run: cat /version.json
	I1211 23:59:18.496740  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.499135  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499486  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.499548  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499569  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499675  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.499845  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.499921  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.499961  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499985  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.500123  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.500135  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.500278  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.500460  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.500604  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.607330  106017 ssh_runner.go:195] Run: systemctl --version
	I1211 23:59:18.613387  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:59:18.776622  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:59:18.783443  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:59:18.783538  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:59:18.799688  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:59:18.799713  106017 start.go:495] detecting cgroup driver to use...
	I1211 23:59:18.799774  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:59:18.816025  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:59:18.830854  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1211 23:59:18.830908  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:59:18.845980  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:59:18.860893  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:59:18.978441  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:59:19.134043  106017 docker.go:233] disabling docker service ...
	I1211 23:59:19.134112  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:59:19.149156  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:59:19.162275  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:59:19.283529  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:59:19.409189  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:59:19.423558  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:59:19.442528  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1211 23:59:19.442599  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.453566  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:59:19.453654  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.464397  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.475199  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.486049  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:59:19.497021  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.507803  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.524919  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
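The sed runs above pin the pause image, force the cgroupfs cgroup manager, and open unprivileged low ports in /etc/crio/crio.conf.d/02-crio.conf. A minimal local sketch of the first two substitutions in Go (path and values copied from the log; not minikube's implementation, which drives the edits over SSH):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in path taken from the log above

	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Mirror the two sed substitutions: pin the pause image and the cgroup manager.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}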
	I1211 23:59:19.535844  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:59:19.545546  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:59:19.545598  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:59:19.559407  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:59:19.569383  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:59:19.689090  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:59:19.791744  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:59:19.791811  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:59:19.796877  106017 start.go:563] Will wait 60s for crictl version
	I1211 23:59:19.796945  106017 ssh_runner.go:195] Run: which crictl
	I1211 23:59:19.801083  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:59:19.845670  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
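minikube reads fields such as RuntimeName and RuntimeVersion out of the `crictl version` output shown above. A hedged sketch of that parse, assuming crictl is on PATH and keeps the same "Key:  Value" layout:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run the same command the log does (without sudo here) and split key/value lines.
	out, err := exec.Command("crictl", "version").Output()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Println("runtime:", fields["RuntimeName"], fields["RuntimeVersion"])
}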
	I1211 23:59:19.845758  106017 ssh_runner.go:195] Run: crio --version
	I1211 23:59:19.875253  106017 ssh_runner.go:195] Run: crio --version
	I1211 23:59:19.904311  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1211 23:59:19.906690  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:19.909356  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:19.909726  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:19.909755  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:19.910412  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:59:19.915735  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:59:19.929145  106017 kubeadm.go:883] updating cluster {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1211 23:59:19.929263  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:59:19.929323  106017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:59:19.962567  106017 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1211 23:59:19.962636  106017 ssh_runner.go:195] Run: which lz4
	I1211 23:59:19.966688  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1211 23:59:19.966797  106017 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:59:19.970897  106017 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:59:19.970929  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1211 23:59:21.360986  106017 crio.go:462] duration metric: took 1.394221262s to copy over tarball
	I1211 23:59:21.361088  106017 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:59:23.449972  106017 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.088850329s)
	I1211 23:59:23.450033  106017 crio.go:469] duration metric: took 2.08900198s to extract the tarball
	I1211 23:59:23.450045  106017 ssh_runner.go:146] rm: /preloaded.tar.lz4
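The preload flow above stats /preloaded.tar.lz4, copies the cached tarball over when the stat fails, unpacks it under /var with lz4, then removes it. A local sketch of that check-then-extract step (same path and tar flags as the log; assumes tar and lz4 are installed and the process may write under /var):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // same path the log stats before copying

	if _, err := os.Stat(tarball); err != nil {
		// In minikube this is where the tarball would be scp'd over; here we just report it.
		fmt.Println("tarball missing, would copy it first:", err)
		return
	}

	// Same extraction command as the log: preserve xattrs, decompress with lz4, unpack into /var.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	_ = os.Remove(tarball) // mirror the rm after extraction
}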
	I1211 23:59:23.487452  106017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:59:23.534823  106017 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:59:23.534855  106017 cache_images.go:84] Images are preloaded, skipping loading
	I1211 23:59:23.534866  106017 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.2 crio true true} ...
	I1211 23:59:23.535012  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1211 23:59:23.535085  106017 ssh_runner.go:195] Run: crio config
	I1211 23:59:23.584878  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:59:23.584896  106017 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 23:59:23.584905  106017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 23:59:23.584925  106017 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565823 NodeName:ha-565823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:59:23.585039  106017 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
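The KubeProxyConfiguration block ends the generated kubeadm.yaml; the whole file is rendered from the kubeadm options logged at kubeadm.go:189 above. A toy text/template sketch of rendering one such fragment (the template and Opts struct here are illustrative, not minikube's bootstrapper code):

package main

import (
	"os"
	"text/template"
)

// Opts holds only the fields the fragment below needs; the real options struct is much larger.
type Opts struct {
	KubernetesVersion    string
	ControlPlaneEndpoint string
	PodSubnet            string
	ServiceSubnet        string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	o := Opts{
		KubernetesVersion:    "v1.31.2",
		ControlPlaneEndpoint: "control-plane.minikube.internal:8443",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}
	// Render to stdout; minikube instead scp's the result to /var/tmp/minikube/kubeadm.yaml.new.
	if err := template.Must(template.New("cfg").Parse(clusterCfg)).Execute(os.Stdout, o); err != nil {
		panic(err)
	}
}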
	
	I1211 23:59:23.585064  106017 kube-vip.go:115] generating kube-vip config ...
	I1211 23:59:23.585112  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1211 23:59:23.603981  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1211 23:59:23.604115  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
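kube-vip.go only reaches "auto-enabling control-plane load-balancing" because the `modprobe --all ip_vs ...` probe above succeeded; lb_enable stays off when the IPVS modules cannot be loaded. A hedged sketch of that gate (module list copied from the log; loading modules needs root; this is not the actual kube-vip.go logic):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same module probe the log runs before deciding whether to enable load balancing.
	err := exec.Command("modprobe", "--all",
		"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run()

	lbEnable := err == nil // only advertise lb_enable=true when the IPVS modules load
	fmt.Printf("lb_enable=%v (modprobe err: %v)\n", lbEnable, err)
}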
	I1211 23:59:23.604182  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1211 23:59:23.614397  106017 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 23:59:23.614477  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1211 23:59:23.624289  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1211 23:59:23.641517  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:59:23.658716  106017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1211 23:59:23.675660  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1211 23:59:23.692530  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1211 23:59:23.696599  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
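Both host.minikube.internal (192.168.39.1, earlier) and control-plane.minikube.internal (192.168.39.254, here) are pinned by filtering any old entry out of /etc/hosts and appending a fresh one. A sketch of the same grep-and-append rewrite in Go (writes to a scratch file instead of /etc/hosts; the real flow cp's a temp file back with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost drops any line already ending in "\t<name>" and appends "ip\tname",
// mirroring the { grep -v ...; echo ...; } > /tmp/h.$$ pipeline in the log.
func pinHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	out := pinHost(string(data), "192.168.39.254", "control-plane.minikube.internal")
	// Write to a scratch copy; the real flow copies the temp file back over /etc/hosts with sudo.
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
		fmt.Println(err)
	}
}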
	I1211 23:59:23.709445  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:59:23.845220  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:59:23.862954  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.19
	I1211 23:59:23.862981  106017 certs.go:194] generating shared ca certs ...
	I1211 23:59:23.863000  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:23.863207  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1211 23:59:23.863251  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1211 23:59:23.863262  106017 certs.go:256] generating profile certs ...
	I1211 23:59:23.863328  106017 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1211 23:59:23.863357  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt with IP's: []
	I1211 23:59:24.110700  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt ...
	I1211 23:59:24.110730  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt: {Name:mk50d526eb9350fec1f3c58be1ef98b2039770b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.110932  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key ...
	I1211 23:59:24.110948  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key: {Name:mk947a896656d347feed0e5ddd7c2c37edce03fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.111050  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c
	I1211 23:59:24.111082  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.254]
	I1211 23:59:24.333387  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c ...
	I1211 23:59:24.333420  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c: {Name:mkfc61798e61cb1d7ac0b35769a3179525ca368b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.333599  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c ...
	I1211 23:59:24.333627  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c: {Name:mk4a04314c10f352160875e4af47370a91a0db88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.333740  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1211 23:59:24.333840  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1211 23:59:24.333924  106017 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1211 23:59:24.333944  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt with IP's: []
	I1211 23:59:24.464961  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt ...
	I1211 23:59:24.464993  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt: {Name:mkbb1cf3b9047082cee6fcd6adaa9509e1729b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.465183  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key ...
	I1211 23:59:24.465203  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key: {Name:mkc9ec571078b7167489918f5cf8f1ea61967aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
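certs.go above generates the apiserver profile certificate with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.254], signed by the shared minikubeCA. A compact crypto/x509 sketch of a certificate with that shape (self-contained, so it creates a throwaway CA; key size and validity here are illustrative, not minikube's values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (errors ignored for brevity in this sketch).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver leaf certificate carrying the IP SANs from the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.19"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}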
	I1211 23:59:24.465319  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 23:59:24.465348  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 23:59:24.465364  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 23:59:24.465387  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 23:59:24.465405  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 23:59:24.465422  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 23:59:24.465435  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 23:59:24.465452  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 23:59:24.465528  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1211 23:59:24.465577  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1211 23:59:24.465592  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1211 23:59:24.465634  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1211 23:59:24.465664  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:59:24.465695  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1211 23:59:24.465752  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1211 23:59:24.465790  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.465812  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.465831  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.466545  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:59:24.494141  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:59:24.519556  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:59:24.544702  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 23:59:24.569766  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 23:59:24.595380  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:59:24.621226  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:59:24.649860  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 23:59:24.698075  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1211 23:59:24.728714  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:59:24.753139  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1211 23:59:24.777957  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:59:24.796289  106017 ssh_runner.go:195] Run: openssl version
	I1211 23:59:24.802883  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1211 23:59:24.816553  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.821741  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.821804  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.828574  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1211 23:59:24.840713  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 23:59:24.853013  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.858281  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.858331  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.864829  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 23:59:24.875963  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1211 23:59:24.886500  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.891673  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.891726  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.898344  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1211 23:59:24.910633  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:59:24.915220  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:59:24.915279  106017 kubeadm.go:392] StartCluster: {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:59:24.915383  106017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:59:24.915454  106017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:59:24.954743  106017 cri.go:89] found id: ""
	I1211 23:59:24.954813  106017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:59:24.965887  106017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:59:24.975963  106017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:59:24.985759  106017 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:59:24.985784  106017 kubeadm.go:157] found existing configuration files:
	
	I1211 23:59:24.985837  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:59:24.995322  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:59:24.995387  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:59:25.005782  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:59:25.015121  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:59:25.015216  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:59:25.024739  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:59:25.033898  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:59:25.033949  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:59:25.043527  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:59:25.052795  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:59:25.052860  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1211 23:59:25.063719  106017 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:59:25.172138  106017 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1211 23:59:25.172231  106017 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 23:59:25.282095  106017 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:59:25.282220  106017 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:59:25.282346  106017 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:59:25.292987  106017 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:59:25.507248  106017 out.go:235]   - Generating certificates and keys ...
	I1211 23:59:25.507374  106017 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 23:59:25.507500  106017 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 23:59:25.628233  106017 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:59:25.895094  106017 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:59:26.195266  106017 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:59:26.355531  106017 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1211 23:59:26.415298  106017 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1211 23:59:26.415433  106017 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-565823 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1211 23:59:26.603280  106017 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1211 23:59:26.603516  106017 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-565823 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1211 23:59:26.737544  106017 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:59:26.938736  106017 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:59:27.118447  106017 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1211 23:59:27.118579  106017 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:59:27.214058  106017 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:59:27.283360  106017 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:59:27.437118  106017 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:59:27.583693  106017 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:59:27.738001  106017 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:59:27.738673  106017 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:59:27.741933  106017 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:59:27.743702  106017 out.go:235]   - Booting up control plane ...
	I1211 23:59:27.743844  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:59:27.744424  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:59:27.746935  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:59:27.765392  106017 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:59:27.772566  106017 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:59:27.772699  106017 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 23:59:27.925671  106017 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:59:27.925813  106017 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:59:28.450340  106017 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 524.075614ms
	I1211 23:59:28.450451  106017 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1211 23:59:34.524805  106017 kubeadm.go:310] [api-check] The API server is healthy after 6.076898322s
	I1211 23:59:34.537381  106017 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:59:34.553285  106017 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:59:35.079814  106017 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:59:35.080057  106017 kubeadm.go:310] [mark-control-plane] Marking the node ha-565823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:59:35.095582  106017 kubeadm.go:310] [bootstrap-token] Using token: lktsit.hvyjnx8elfe20z7f
	I1211 23:59:35.097027  106017 out.go:235]   - Configuring RBAC rules ...
	I1211 23:59:35.097177  106017 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:59:35.101780  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:59:35.113593  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:59:35.118164  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:59:35.121511  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:59:35.125148  106017 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:59:35.144131  106017 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:59:35.407109  106017 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 23:59:35.930699  106017 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 23:59:35.931710  106017 kubeadm.go:310] 
	I1211 23:59:35.931771  106017 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 23:59:35.931775  106017 kubeadm.go:310] 
	I1211 23:59:35.931851  106017 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 23:59:35.931859  106017 kubeadm.go:310] 
	I1211 23:59:35.931880  106017 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 23:59:35.931927  106017 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:59:35.931982  106017 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:59:35.932000  106017 kubeadm.go:310] 
	I1211 23:59:35.932049  106017 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 23:59:35.932058  106017 kubeadm.go:310] 
	I1211 23:59:35.932118  106017 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:59:35.932126  106017 kubeadm.go:310] 
	I1211 23:59:35.932168  106017 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 23:59:35.932259  106017 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:59:35.932333  106017 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:59:35.932350  106017 kubeadm.go:310] 
	I1211 23:59:35.932432  106017 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:59:35.932499  106017 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 23:59:35.932506  106017 kubeadm.go:310] 
	I1211 23:59:35.932579  106017 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lktsit.hvyjnx8elfe20z7f \
	I1211 23:59:35.932666  106017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1211 23:59:35.932687  106017 kubeadm.go:310] 	--control-plane 
	I1211 23:59:35.932692  106017 kubeadm.go:310] 
	I1211 23:59:35.932780  106017 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:59:35.932793  106017 kubeadm.go:310] 
	I1211 23:59:35.932900  106017 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lktsit.hvyjnx8elfe20z7f \
	I1211 23:59:35.933031  106017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1211 23:59:35.933914  106017 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
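The [kubelet-check] and [api-check] phases above are kubeadm polling health endpoints until they answer. The same readiness probe is easy to reproduce against the kubelet healthz endpoint named in the output; a minimal sketch (endpoint and the 4m0s budget are taken from the log, the code is not kubeadm's):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // kubeadm advertises "up to 4m0s"
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet healthy:", string(body))
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kubelet did not become healthy in time")
}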
	I1211 23:59:35.934034  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:59:35.934056  106017 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 23:59:35.936050  106017 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1211 23:59:35.937506  106017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1211 23:59:35.943577  106017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1211 23:59:35.943610  106017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1211 23:59:35.964609  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1211 23:59:36.354699  106017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:59:36.354799  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:36.354832  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823 minikube.k8s.io/updated_at=2024_12_11T23_59_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=true
	I1211 23:59:36.386725  106017 ops.go:34] apiserver oom_adj: -16
	I1211 23:59:36.511318  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:37.011972  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:37.511719  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:38.012059  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:38.511637  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:39.012451  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:39.512222  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.012218  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.512204  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.605442  106017 kubeadm.go:1113] duration metric: took 4.250718988s to wait for elevateKubeSystemPrivileges
	I1211 23:59:40.605479  106017 kubeadm.go:394] duration metric: took 15.690206878s to StartCluster
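The repeated `kubectl get sa default` runs above are minikube waiting (elevateKubeSystemPrivileges) for the default ServiceAccount to exist before creating the minikube-rbac clusterrolebinding. An exec-based sketch of that wait (the kubeconfig path is a placeholder and the timeout is arbitrary):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubeconfig := "/var/lib/minikube/kubeconfig" // placeholder; point at your own kubeconfig
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the "default" ServiceAccount exists in the default namespace.
		err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}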
	I1211 23:59:40.605505  106017 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:40.605593  106017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:59:40.606578  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:40.606860  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:59:40.606860  106017 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:59:40.606883  106017 start.go:241] waiting for startup goroutines ...
	I1211 23:59:40.606899  106017 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 23:59:40.606982  106017 addons.go:69] Setting storage-provisioner=true in profile "ha-565823"
	I1211 23:59:40.606989  106017 addons.go:69] Setting default-storageclass=true in profile "ha-565823"
	I1211 23:59:40.607004  106017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565823"
	I1211 23:59:40.607018  106017 addons.go:234] Setting addon storage-provisioner=true in "ha-565823"
	I1211 23:59:40.607045  106017 host.go:66] Checking if "ha-565823" exists ...
	I1211 23:59:40.607426  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.607469  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.607635  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:40.607793  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.607838  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.622728  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I1211 23:59:40.622807  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1211 23:59:40.623266  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.623370  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.623966  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.623993  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.624004  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.624015  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.624390  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.624398  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.624567  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.624920  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.624961  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.626695  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:59:40.627009  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1211 23:59:40.627499  106017 cert_rotation.go:140] Starting client certificate rotation controller
	I1211 23:59:40.627813  106017 addons.go:234] Setting addon default-storageclass=true in "ha-565823"
	I1211 23:59:40.627859  106017 host.go:66] Checking if "ha-565823" exists ...
	I1211 23:59:40.628133  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.628177  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.640869  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I1211 23:59:40.641437  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.642016  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.642043  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.642434  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.642635  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.643106  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I1211 23:59:40.643674  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.644240  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.644275  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.644588  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.644640  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:40.645087  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.645136  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.646489  106017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:59:40.647996  106017 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:59:40.648015  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:59:40.648030  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:40.651165  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.651679  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:40.651703  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.651939  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:40.652136  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:40.652353  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:40.652515  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:40.661089  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
	I1211 23:59:40.661521  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.661949  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.661970  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.662302  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.662464  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.664023  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:40.664204  106017 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:59:40.664219  106017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:59:40.664234  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:40.666799  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.667194  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:40.667218  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.667366  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:40.667518  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:40.667676  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:40.667787  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:40.766556  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:59:40.838934  106017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:59:40.853931  106017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:59:41.384410  106017 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
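For reference, the bash one-liner above rewrites the coredns ConfigMap so that a hosts block resolving host.minikube.internal sits ahead of the forward directive (and a log directive is inserted before errors). The resulting Corefile fragment looks roughly like this; a sketch reconstructed from the sed expressions, not the verbatim ConfigMap:

    .:53 {
        log
        errors
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }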
	I1211 23:59:41.687789  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.687839  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688024  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688044  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688143  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688158  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.688166  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688175  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688183  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688295  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688309  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688316  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688337  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688398  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688424  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688407  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.688511  106017 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 23:59:41.688531  106017 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 23:59:41.688635  106017 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1211 23:59:41.688642  106017 round_trippers.go:469] Request Headers:
	I1211 23:59:41.688654  106017 round_trippers.go:473]     Accept: application/json, */*
	I1211 23:59:41.688660  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1211 23:59:41.689067  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.689084  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.689112  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.703120  106017 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1211 23:59:41.703858  106017 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1211 23:59:41.703876  106017 round_trippers.go:469] Request Headers:
	I1211 23:59:41.703888  106017 round_trippers.go:473]     Content-Type: application/json
	I1211 23:59:41.703896  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1211 23:59:41.703902  106017 round_trippers.go:473]     Accept: application/json, */*
	I1211 23:59:41.707451  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1211 23:59:41.707880  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.707905  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.708200  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.708289  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.708309  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.710098  106017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1211 23:59:41.711624  106017 addons.go:510] duration metric: took 1.104728302s for enable addons: enabled=[storage-provisioner default-storageclass]
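Both addons are applied with the node's own kubectl against the local kubeconfig, so the result can also be checked from the workspace through the cluster context; illustrative commands, assuming the usual minikube object names (context named after the profile, pod named storage-provisioner):

    kubectl --context ha-565823 get storageclass
    kubectl --context ha-565823 -n kube-system get pod storage-provisioner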
	I1211 23:59:41.711657  106017 start.go:246] waiting for cluster config update ...
	I1211 23:59:41.711669  106017 start.go:255] writing updated cluster config ...
	I1211 23:59:41.713334  106017 out.go:201] 
	I1211 23:59:41.714788  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:41.714856  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:41.716555  106017 out.go:177] * Starting "ha-565823-m02" control-plane node in "ha-565823" cluster
	I1211 23:59:41.717794  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:59:41.717815  106017 cache.go:56] Caching tarball of preloaded images
	I1211 23:59:41.717923  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:59:41.717935  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:59:41.717999  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:41.718156  106017 start.go:360] acquireMachinesLock for ha-565823-m02: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:59:41.718199  106017 start.go:364] duration metric: took 25.794µs to acquireMachinesLock for "ha-565823-m02"
	I1211 23:59:41.718224  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:59:41.718291  106017 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1211 23:59:41.719692  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 23:59:41.719777  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:41.719812  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:41.734465  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1211 23:59:41.734950  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:41.735455  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:41.735478  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:41.735843  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:41.736006  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1211 23:59:41.736149  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1211 23:59:41.736349  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1211 23:59:41.736395  106017 client.go:168] LocalClient.Create starting
	I1211 23:59:41.736425  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:59:41.736455  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:59:41.736469  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:59:41.736519  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:59:41.736537  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:59:41.736547  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:59:41.736559  106017 main.go:141] libmachine: Running pre-create checks...
	I1211 23:59:41.736567  106017 main.go:141] libmachine: (ha-565823-m02) Calling .PreCreateCheck
	I1211 23:59:41.736735  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1211 23:59:41.737076  106017 main.go:141] libmachine: Creating machine...
	I1211 23:59:41.737091  106017 main.go:141] libmachine: (ha-565823-m02) Calling .Create
	I1211 23:59:41.737203  106017 main.go:141] libmachine: (ha-565823-m02) Creating KVM machine...
	I1211 23:59:41.738412  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found existing default KVM network
	I1211 23:59:41.738502  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found existing private KVM network mk-ha-565823
	I1211 23:59:41.738691  106017 main.go:141] libmachine: (ha-565823-m02) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 ...
	I1211 23:59:41.738735  106017 main.go:141] libmachine: (ha-565823-m02) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:59:41.738778  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:41.738685  106399 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:59:41.738888  106017 main.go:141] libmachine: (ha-565823-m02) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:59:42.010827  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.010671  106399 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa...
	I1211 23:59:42.081269  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.081125  106399 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk...
	I1211 23:59:42.081297  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Writing magic tar header
	I1211 23:59:42.081315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Writing SSH key tar header
	I1211 23:59:42.081327  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.081241  106399 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 ...
	I1211 23:59:42.081337  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02
	I1211 23:59:42.081349  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:59:42.081395  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 (perms=drwx------)
	I1211 23:59:42.081428  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:59:42.081445  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:59:42.081465  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:59:42.081477  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:59:42.081489  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:59:42.081497  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home
	I1211 23:59:42.081510  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:59:42.081524  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:59:42.081536  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Skipping /home - not owner
	I1211 23:59:42.081553  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:59:42.081564  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
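At this point the m02 machine directory on the Jenkins host contains the generated SSH key, the copied boot2docker.iso and the raw disk image whose permissions were just fixed up; they can be inspected directly from the workspace (illustrative commands, not part of the test run):

    ls -l /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/
    qemu-img info /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk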
	I1211 23:59:42.081577  106017 main.go:141] libmachine: (ha-565823-m02) Creating domain...
	I1211 23:59:42.082570  106017 main.go:141] libmachine: (ha-565823-m02) define libvirt domain using xml: 
	I1211 23:59:42.082593  106017 main.go:141] libmachine: (ha-565823-m02) <domain type='kvm'>
	I1211 23:59:42.082600  106017 main.go:141] libmachine: (ha-565823-m02)   <name>ha-565823-m02</name>
	I1211 23:59:42.082605  106017 main.go:141] libmachine: (ha-565823-m02)   <memory unit='MiB'>2200</memory>
	I1211 23:59:42.082610  106017 main.go:141] libmachine: (ha-565823-m02)   <vcpu>2</vcpu>
	I1211 23:59:42.082618  106017 main.go:141] libmachine: (ha-565823-m02)   <features>
	I1211 23:59:42.082626  106017 main.go:141] libmachine: (ha-565823-m02)     <acpi/>
	I1211 23:59:42.082641  106017 main.go:141] libmachine: (ha-565823-m02)     <apic/>
	I1211 23:59:42.082671  106017 main.go:141] libmachine: (ha-565823-m02)     <pae/>
	I1211 23:59:42.082693  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.082705  106017 main.go:141] libmachine: (ha-565823-m02)   </features>
	I1211 23:59:42.082719  106017 main.go:141] libmachine: (ha-565823-m02)   <cpu mode='host-passthrough'>
	I1211 23:59:42.082728  106017 main.go:141] libmachine: (ha-565823-m02)   
	I1211 23:59:42.082736  106017 main.go:141] libmachine: (ha-565823-m02)   </cpu>
	I1211 23:59:42.082744  106017 main.go:141] libmachine: (ha-565823-m02)   <os>
	I1211 23:59:42.082754  106017 main.go:141] libmachine: (ha-565823-m02)     <type>hvm</type>
	I1211 23:59:42.082761  106017 main.go:141] libmachine: (ha-565823-m02)     <boot dev='cdrom'/>
	I1211 23:59:42.082771  106017 main.go:141] libmachine: (ha-565823-m02)     <boot dev='hd'/>
	I1211 23:59:42.082779  106017 main.go:141] libmachine: (ha-565823-m02)     <bootmenu enable='no'/>
	I1211 23:59:42.082792  106017 main.go:141] libmachine: (ha-565823-m02)   </os>
	I1211 23:59:42.082803  106017 main.go:141] libmachine: (ha-565823-m02)   <devices>
	I1211 23:59:42.082811  106017 main.go:141] libmachine: (ha-565823-m02)     <disk type='file' device='cdrom'>
	I1211 23:59:42.082828  106017 main.go:141] libmachine: (ha-565823-m02)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/boot2docker.iso'/>
	I1211 23:59:42.082836  106017 main.go:141] libmachine: (ha-565823-m02)       <target dev='hdc' bus='scsi'/>
	I1211 23:59:42.082847  106017 main.go:141] libmachine: (ha-565823-m02)       <readonly/>
	I1211 23:59:42.082857  106017 main.go:141] libmachine: (ha-565823-m02)     </disk>
	I1211 23:59:42.082887  106017 main.go:141] libmachine: (ha-565823-m02)     <disk type='file' device='disk'>
	I1211 23:59:42.082908  106017 main.go:141] libmachine: (ha-565823-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:59:42.082928  106017 main.go:141] libmachine: (ha-565823-m02)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk'/>
	I1211 23:59:42.082944  106017 main.go:141] libmachine: (ha-565823-m02)       <target dev='hda' bus='virtio'/>
	I1211 23:59:42.082957  106017 main.go:141] libmachine: (ha-565823-m02)     </disk>
	I1211 23:59:42.082968  106017 main.go:141] libmachine: (ha-565823-m02)     <interface type='network'>
	I1211 23:59:42.082978  106017 main.go:141] libmachine: (ha-565823-m02)       <source network='mk-ha-565823'/>
	I1211 23:59:42.082985  106017 main.go:141] libmachine: (ha-565823-m02)       <model type='virtio'/>
	I1211 23:59:42.082990  106017 main.go:141] libmachine: (ha-565823-m02)     </interface>
	I1211 23:59:42.082997  106017 main.go:141] libmachine: (ha-565823-m02)     <interface type='network'>
	I1211 23:59:42.083003  106017 main.go:141] libmachine: (ha-565823-m02)       <source network='default'/>
	I1211 23:59:42.083012  106017 main.go:141] libmachine: (ha-565823-m02)       <model type='virtio'/>
	I1211 23:59:42.083025  106017 main.go:141] libmachine: (ha-565823-m02)     </interface>
	I1211 23:59:42.083038  106017 main.go:141] libmachine: (ha-565823-m02)     <serial type='pty'>
	I1211 23:59:42.083047  106017 main.go:141] libmachine: (ha-565823-m02)       <target port='0'/>
	I1211 23:59:42.083054  106017 main.go:141] libmachine: (ha-565823-m02)     </serial>
	I1211 23:59:42.083065  106017 main.go:141] libmachine: (ha-565823-m02)     <console type='pty'>
	I1211 23:59:42.083077  106017 main.go:141] libmachine: (ha-565823-m02)       <target type='serial' port='0'/>
	I1211 23:59:42.083089  106017 main.go:141] libmachine: (ha-565823-m02)     </console>
	I1211 23:59:42.083098  106017 main.go:141] libmachine: (ha-565823-m02)     <rng model='virtio'>
	I1211 23:59:42.083112  106017 main.go:141] libmachine: (ha-565823-m02)       <backend model='random'>/dev/random</backend>
	I1211 23:59:42.083126  106017 main.go:141] libmachine: (ha-565823-m02)     </rng>
	I1211 23:59:42.083154  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.083172  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.083184  106017 main.go:141] libmachine: (ha-565823-m02)   </devices>
	I1211 23:59:42.083193  106017 main.go:141] libmachine: (ha-565823-m02) </domain>
	I1211 23:59:42.083206  106017 main.go:141] libmachine: (ha-565823-m02) 
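The XML above is what the kvm2 driver hands to libvirt before it enters the DHCP wait loop below; while that loop is polling for a lease, the same information is visible with the stock virsh client against qemu:///system (illustrative commands, run on the Jenkins host):

    virsh -c qemu:///system dumpxml ha-565823-m02
    virsh -c qemu:///system net-dhcp-leases mk-ha-565823
    virsh -c qemu:///system domifaddr ha-565823-m02 --source lease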
	I1211 23:59:42.090031  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:4e:60:e6 in network default
	I1211 23:59:42.090722  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring networks are active...
	I1211 23:59:42.090744  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:42.091386  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring network default is active
	I1211 23:59:42.091728  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring network mk-ha-565823 is active
	I1211 23:59:42.092172  106017 main.go:141] libmachine: (ha-565823-m02) Getting domain xml...
	I1211 23:59:42.092821  106017 main.go:141] libmachine: (ha-565823-m02) Creating domain...
	I1211 23:59:43.306722  106017 main.go:141] libmachine: (ha-565823-m02) Waiting to get IP...
	I1211 23:59:43.307541  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.307970  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.308021  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.307943  106399 retry.go:31] will retry after 188.292611ms: waiting for machine to come up
	I1211 23:59:43.498538  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.498980  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.499007  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.498936  106399 retry.go:31] will retry after 383.283577ms: waiting for machine to come up
	I1211 23:59:43.883676  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.884158  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.884186  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.884123  106399 retry.go:31] will retry after 368.673726ms: waiting for machine to come up
	I1211 23:59:44.254720  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:44.255182  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:44.255205  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:44.255142  106399 retry.go:31] will retry after 403.445822ms: waiting for machine to come up
	I1211 23:59:44.660664  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:44.661153  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:44.661178  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:44.661074  106399 retry.go:31] will retry after 718.942978ms: waiting for machine to come up
	I1211 23:59:45.382183  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:45.382736  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:45.382761  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:45.382694  106399 retry.go:31] will retry after 941.806671ms: waiting for machine to come up
	I1211 23:59:46.326070  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:46.326533  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:46.326566  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:46.326481  106399 retry.go:31] will retry after 1.01864437s: waiting for machine to come up
	I1211 23:59:47.347315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:47.347790  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:47.347812  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:47.347737  106399 retry.go:31] will retry after 1.213138s: waiting for machine to come up
	I1211 23:59:48.562238  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:48.562705  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:48.562737  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:48.562658  106399 retry.go:31] will retry after 1.846591325s: waiting for machine to come up
	I1211 23:59:50.410650  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:50.411116  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:50.411143  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:50.411072  106399 retry.go:31] will retry after 2.02434837s: waiting for machine to come up
	I1211 23:59:52.436763  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:52.437247  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:52.437276  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:52.437194  106399 retry.go:31] will retry after 1.785823174s: waiting for machine to come up
	I1211 23:59:54.224640  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:54.224948  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:54.224975  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:54.224901  106399 retry.go:31] will retry after 2.203569579s: waiting for machine to come up
	I1211 23:59:56.431378  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:56.431904  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:56.431933  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:56.431858  106399 retry.go:31] will retry after 3.94903919s: waiting for machine to come up
	I1212 00:00:00.384703  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:00.385175  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1212 00:00:00.385208  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1212 00:00:00.385121  106399 retry.go:31] will retry after 3.809627495s: waiting for machine to come up
	I1212 00:00:04.197607  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.198181  106017 main.go:141] libmachine: (ha-565823-m02) Found IP for machine: 192.168.39.103
	I1212 00:00:04.198204  106017 main.go:141] libmachine: (ha-565823-m02) Reserving static IP address...
	I1212 00:00:04.198220  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has current primary IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.198616  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find host DHCP lease matching {name: "ha-565823-m02", mac: "52:54:00:cc:31:80", ip: "192.168.39.103"} in network mk-ha-565823
	I1212 00:00:04.273114  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Getting to WaitForSSH function...
	I1212 00:00:04.273143  106017 main.go:141] libmachine: (ha-565823-m02) Reserved static IP address: 192.168.39.103
	I1212 00:00:04.273155  106017 main.go:141] libmachine: (ha-565823-m02) Waiting for SSH to be available...
	I1212 00:00:04.275998  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.276409  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.276438  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.276561  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using SSH client type: external
	I1212 00:00:04.276592  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa (-rw-------)
	I1212 00:00:04.276623  106017 main.go:141] libmachine: (ha-565823-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:00:04.276639  106017 main.go:141] libmachine: (ha-565823-m02) DBG | About to run SSH command:
	I1212 00:00:04.276655  106017 main.go:141] libmachine: (ha-565823-m02) DBG | exit 0
	I1212 00:00:04.400102  106017 main.go:141] libmachine: (ha-565823-m02) DBG | SSH cmd err, output: <nil>: 
	I1212 00:00:04.400348  106017 main.go:141] libmachine: (ha-565823-m02) KVM machine creation complete!
	I1212 00:00:04.400912  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1212 00:00:04.401484  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:04.401664  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:04.401821  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:00:04.401837  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetState
	I1212 00:00:04.403174  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:00:04.403192  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:00:04.403199  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:00:04.403208  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.405388  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.405786  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.405820  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.405928  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.406109  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.406313  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.406472  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.406636  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.406846  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.406860  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:00:04.507379  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:00:04.507409  106017 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:00:04.507426  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.510219  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.510595  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.510633  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.510776  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.511014  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.511172  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.511323  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.511507  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.511752  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.511765  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:00:04.612413  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:00:04.612516  106017 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:00:04.612530  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:00:04.612538  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.612840  106017 buildroot.go:166] provisioning hostname "ha-565823-m02"
	I1212 00:00:04.612874  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.613079  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.615872  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.616272  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.616326  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.616447  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.616621  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.616780  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.616976  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.617134  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.617294  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.617306  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823-m02 && echo "ha-565823-m02" | sudo tee /etc/hostname
	I1212 00:00:04.736911  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823-m02
	
	I1212 00:00:04.736949  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.739899  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.740287  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.740321  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.740530  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.740723  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.740885  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.741022  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.741259  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.741462  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.741481  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:00:04.854133  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
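To reproduce this provisioning step by hand, the driver's generated key (the id_rsa path logged in the WaitForSSH step above) gives direct access to the node; an illustrative check, not part of the test run:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa \
        docker@192.168.39.103 'hostname; grep ha-565823-m02 /etc/hosts'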
	I1212 00:00:04.854171  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:00:04.854189  106017 buildroot.go:174] setting up certificates
	I1212 00:00:04.854199  106017 provision.go:84] configureAuth start
	I1212 00:00:04.854213  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.854617  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:04.858031  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.858466  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.858492  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.858772  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.860980  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.861315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.861344  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.861482  106017 provision.go:143] copyHostCerts
	I1212 00:00:04.861512  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:00:04.861546  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:00:04.861556  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:00:04.861621  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:00:04.861699  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:00:04.861718  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:00:04.861725  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:00:04.861748  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:00:04.861792  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:00:04.861809  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:00:04.861815  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:00:04.861836  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:00:04.861892  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823-m02 san=[127.0.0.1 192.168.39.103 ha-565823-m02 localhost minikube]
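The san=[...] list above ends up as the Subject Alternative Names of the freshly generated server certificate; the quickest way to confirm is openssl against the same path (illustrative):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'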
	I1212 00:00:05.017387  106017 provision.go:177] copyRemoteCerts
	I1212 00:00:05.017447  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:00:05.017475  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.020320  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.020751  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.020781  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.020994  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.021285  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.021461  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.021631  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.103134  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:00:05.103225  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:00:05.128318  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:00:05.128392  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:00:05.152814  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:00:05.152893  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:00:05.177479  106017 provision.go:87] duration metric: took 323.264224ms to configureAuth
	I1212 00:00:05.177509  106017 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:00:05.177674  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:05.177748  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.180791  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.181249  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.181280  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.181463  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.181702  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.181870  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.182010  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.182176  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:05.182341  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:05.182357  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:00:05.417043  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
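The tee/restart above is how the '--insecure-registry 10.96.0.0/12' option reaches CRI-O on the new node; over the same SSH session a quick sanity check would be (illustrative):

    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio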
	I1212 00:00:05.417067  106017 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:00:05.417075  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetURL
	I1212 00:00:05.418334  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using libvirt version 6000000
	I1212 00:00:05.420596  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.420905  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.420938  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.421114  106017 main.go:141] libmachine: Docker is up and running!
	I1212 00:00:05.421129  106017 main.go:141] libmachine: Reticulating splines...
	I1212 00:00:05.421139  106017 client.go:171] duration metric: took 23.684732891s to LocalClient.Create
	I1212 00:00:05.421170  106017 start.go:167] duration metric: took 23.684823561s to libmachine.API.Create "ha-565823"
	I1212 00:00:05.421183  106017 start.go:293] postStartSetup for "ha-565823-m02" (driver="kvm2")
	I1212 00:00:05.421197  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:00:05.421214  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.421468  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:00:05.421495  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.424694  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.425050  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.425083  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.425238  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.425449  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.425599  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.425739  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.506562  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:00:05.511891  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:00:05.511921  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:00:05.512000  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:00:05.512114  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:00:05.512128  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:00:05.512236  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:00:05.525426  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:00:05.552318  106017 start.go:296] duration metric: took 131.1154ms for postStartSetup
	I1212 00:00:05.552386  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1212 00:00:05.553038  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:05.556173  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.556661  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.556704  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.556972  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:05.557179  106017 start.go:128] duration metric: took 23.838875142s to createHost
	I1212 00:00:05.557206  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.559644  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.560000  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.560021  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.560242  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.560469  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.560659  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.560833  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.561033  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:05.561234  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:05.561248  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:00:05.664479  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961605.636878321
	
	I1212 00:00:05.664504  106017 fix.go:216] guest clock: 1733961605.636878321
	I1212 00:00:05.664511  106017 fix.go:229] Guest: 2024-12-12 00:00:05.636878321 +0000 UTC Remote: 2024-12-12 00:00:05.557193497 +0000 UTC m=+75.719020541 (delta=79.684824ms)
	I1212 00:00:05.664529  106017 fix.go:200] guest clock delta is within tolerance: 79.684824ms
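The fix.go lines above compare the guest clock against the host clock and skip a resync when the delta is under tolerance. A minimal Go sketch of that check (function name and tolerance value are illustrative, not minikube's actual API):

```go
package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock is close enough
// to the host clock that no resync is needed. Names are illustrative.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(79 * time.Millisecond) // roughly the delta seen in the log
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
```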
	I1212 00:00:05.664536  106017 start.go:83] releasing machines lock for "ha-565823-m02", held for 23.946326821s
	I1212 00:00:05.664559  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.664834  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:05.667309  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.667587  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.667625  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.670169  106017 out.go:177] * Found network options:
	I1212 00:00:05.671775  106017 out.go:177]   - NO_PROXY=192.168.39.19
	W1212 00:00:05.673420  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:00:05.673451  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.673974  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.674184  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.674310  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:00:05.674362  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	W1212 00:00:05.674404  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:00:05.674488  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:00:05.674510  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.677209  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677558  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.677588  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677632  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677782  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.677967  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.678067  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.678094  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.678133  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.678286  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.678288  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.678440  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.678560  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.678668  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.906824  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:00:05.913945  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:00:05.914026  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:00:05.931775  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
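The find/-exec mv step above sidelines bridge and podman CNI configs by renaming them with a .mk_disabled suffix. A rough Go equivalent, assuming direct filesystem access instead of minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", mirroring the find/-exec mv command in the log.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}
```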
	I1212 00:00:05.931797  106017 start.go:495] detecting cgroup driver to use...
	I1212 00:00:05.931857  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:00:05.948556  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:00:05.963326  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:00:05.963397  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:00:05.978208  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:00:05.992483  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:00:06.103988  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:00:06.275509  106017 docker.go:233] disabling docker service ...
	I1212 00:00:06.275580  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:00:06.293042  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:00:06.306048  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:00:06.431702  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:00:06.557913  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:00:06.573066  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:00:06.592463  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:00:06.592536  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.604024  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:00:06.604087  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.615267  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.626194  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.637083  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:00:06.648061  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.659477  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.677134  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
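The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf for the pause image and the cgroup driver. A hedged Go sketch of the same in-place rewrite using regexp (file path and values taken from the log; error handling trimmed for brevity):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf replaces the pause_image and cgroup_manager lines in a
// cri-o drop-in, like the sed commands in the log.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Println("error:", err)
	}
}
```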
	I1212 00:00:06.687875  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:00:06.701376  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:00:06.701451  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:00:06.714621  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:00:06.724651  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:06.844738  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:00:06.941123  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:00:06.941186  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
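After restarting CRI-O, minikube waits up to 60s for the socket path to reappear. A small Go sketch of that kind of poll-until-exists loop (names are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path to exist, a rough stand-in for the
// "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
func waitForSocket(ctx context.Context, path string, interval time.Duration) error {
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %s: %w", path, ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	if err := waitForSocket(ctx, "/var/run/crio/crio.sock", time.Second); err != nil {
		fmt.Println("error:", err)
	}
}
```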
	I1212 00:00:06.946025  106017 start.go:563] Will wait 60s for crictl version
	I1212 00:00:06.946103  106017 ssh_runner.go:195] Run: which crictl
	I1212 00:00:06.950454  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:00:06.989220  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:00:06.989302  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:00:07.018407  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:00:07.049375  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:00:07.051430  106017 out.go:177]   - env NO_PROXY=192.168.39.19
	I1212 00:00:07.052588  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:07.055087  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:07.055359  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:07.055377  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:07.055577  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:00:07.059718  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
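The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the current mapping. A rough Go equivalent, assuming direct write access to the file:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\t<host>" and appends
// "<ip>\t<host>", mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
```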
	I1212 00:00:07.072121  106017 mustload.go:65] Loading cluster: ha-565823
	I1212 00:00:07.072328  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:07.072649  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:07.072692  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:07.087345  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I1212 00:00:07.087790  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:07.088265  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:07.088285  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:07.088623  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:07.088818  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:00:07.090394  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:00:07.090786  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:07.090832  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:07.107441  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I1212 00:00:07.107836  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:07.108308  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:07.108327  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:07.108632  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:07.108786  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:07.108915  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.103
	I1212 00:00:07.108926  106017 certs.go:194] generating shared ca certs ...
	I1212 00:00:07.108939  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.109062  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:00:07.109105  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:00:07.109114  106017 certs.go:256] generating profile certs ...
	I1212 00:00:07.109178  106017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:00:07.109202  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc
	I1212 00:00:07.109217  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.254]
	I1212 00:00:07.203114  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc ...
	I1212 00:00:07.203150  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc: {Name:mk3a75c055b0a829a056d90903c78ae5decf9bac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.203349  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc ...
	I1212 00:00:07.203372  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc: {Name:mkce850d5486843203391b76609d5fd65c614c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.203468  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:00:07.203647  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
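The apiserver profile cert generated above carries the service IP, loopback, both node IPs, and the kube-vip VIP as SANs. Illustrative only: a self-signed certificate with that same SAN list built via crypto/x509 (minikube signs with its minikubeCA rather than self-signing):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Generate a throwaway key and a server cert whose IP SANs match the
	// list shown in the log (service IP, loopback, node IPs, VIP).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.19"), net.ParseIP("192.168.39.103"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```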
	I1212 00:00:07.203815  106017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:00:07.203836  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:00:07.203855  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:00:07.203870  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:00:07.203891  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:00:07.203909  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:00:07.203931  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:00:07.203949  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:00:07.203968  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:00:07.204035  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:00:07.204078  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:00:07.204113  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:00:07.204170  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:00:07.204217  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:00:07.204255  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:00:07.204310  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:00:07.204351  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.204383  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.204402  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.204445  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:00:07.207043  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:07.207413  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:00:07.207439  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:07.207647  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:00:07.207863  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:00:07.208027  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:00:07.208177  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:00:07.288012  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 00:00:07.293204  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 00:00:07.304789  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 00:00:07.310453  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 00:00:07.321124  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 00:00:07.326057  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 00:00:07.337737  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 00:00:07.342691  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1212 00:00:07.354806  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 00:00:07.359143  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 00:00:07.371799  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 00:00:07.376295  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 00:00:07.387705  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:00:07.415288  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:00:07.440414  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:00:07.466177  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:00:07.490907  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 00:00:07.517228  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:00:07.542858  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:00:07.567465  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:00:07.592181  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:00:07.616218  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:00:07.641063  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:00:07.665682  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 00:00:07.683443  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 00:00:07.700820  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 00:00:07.718283  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1212 00:00:07.735173  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 00:00:07.752079  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 00:00:07.770479  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 00:00:07.789102  106017 ssh_runner.go:195] Run: openssl version
	I1212 00:00:07.795248  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:00:07.806811  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.811750  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.811816  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.818034  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:00:07.829409  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:00:07.840952  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.845782  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.845853  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.851849  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:00:07.863158  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:00:07.875091  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.880111  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.880173  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.886325  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
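Each CA bundle dropped into /usr/share/ca-certificates above is also linked as /etc/ssl/certs/&lt;openssl-hash&gt;.0 so OpenSSL's lookup-by-hash finds it. A sketch of that step in Go, shelling out to openssl for the hash as the log does (local execution assumed; minikube runs these over SSH):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and links it
// as /etc/ssl/certs/<hash>.0, mirroring the openssl x509 -hash + ln -fs pair.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
```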
	I1212 00:00:07.897750  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:00:07.902056  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:00:07.902131  106017 kubeadm.go:934] updating node {m02 192.168.39.103 8443 v1.31.2 crio true true} ...
	I1212 00:00:07.902244  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:00:07.902279  106017 kube-vip.go:115] generating kube-vip config ...
	I1212 00:00:07.902323  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:00:07.920010  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:00:07.920099  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1212 00:00:07.920166  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:00:07.930159  106017 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1212 00:00:07.930221  106017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1212 00:00:07.939751  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1212 00:00:07.939776  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:00:07.939831  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:00:07.939835  106017 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1212 00:00:07.939861  106017 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1212 00:00:07.944054  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1212 00:00:07.944086  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1212 00:00:09.149265  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:00:09.168056  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:00:09.168181  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:00:09.173566  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1212 00:00:09.173601  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1212 00:00:09.219150  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:00:09.219238  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:00:09.234545  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1212 00:00:09.234589  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
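The kubectl/kubelet/kubeadm downloads above are checksum-pinned via the ?checksum=file:...sha256 query. A minimal Go sketch of a download verified against a published SHA-256 digest (URL layout assumed from the log; minikube uses its own download package for this):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadWithSHA256 fetches url into dest and checks it against the hex
// digest published at url+".sha256".
func downloadWithSHA256(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(want))
	if len(fields) == 0 || hex.EncodeToString(h.Sum(nil)) != fields[0] {
		return fmt.Errorf("checksum mismatch for %s", url)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
	if err := downloadWithSHA256(url, "kubectl"); err != nil {
		fmt.Println("error:", err)
	}
}
```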
	I1212 00:00:09.726465  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 00:00:09.736811  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1212 00:00:09.753799  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:00:09.771455  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:00:09.789916  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:00:09.794008  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:00:09.807290  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:09.944370  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:00:09.973225  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:00:09.973893  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:09.973959  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:09.989196  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I1212 00:00:09.989723  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:09.990363  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:09.990386  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:09.990735  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:09.990931  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:09.991104  106017 start.go:317] joinCluster: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:00:09.991225  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:00:09.991249  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:00:09.994437  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:09.995018  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:00:09.995065  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:09.995202  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:00:09.995448  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:00:09.995585  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:00:09.995765  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:00:10.156968  106017 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:10.157029  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token huaiy2.jqx4ang4teqw9q83 --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443"
	I1212 00:00:31.347275  106017 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token huaiy2.jqx4ang4teqw9q83 --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443": (21.190211224s)
	I1212 00:00:31.347321  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:00:31.826934  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823-m02 minikube.k8s.io/updated_at=2024_12_12T00_00_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=false
	I1212 00:00:32.001431  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565823-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1212 00:00:32.141631  106017 start.go:319] duration metric: took 22.150523355s to joinCluster
	I1212 00:00:32.141725  106017 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:32.141997  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:32.143552  106017 out.go:177] * Verifying Kubernetes components...
	I1212 00:00:32.145227  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:32.332043  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:00:32.348508  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:00:32.348864  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 00:00:32.348951  106017 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I1212 00:00:32.349295  106017 node_ready.go:35] waiting up to 6m0s for node "ha-565823-m02" to be "Ready" ...
	I1212 00:00:32.349423  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:32.349436  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:32.349449  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:32.349460  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:32.362203  106017 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
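The round_trippers loop recorded here polls GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02 roughly every half second until the node reports Ready. A sketch of the same wait using client-go (kubeconfig path and names are assumptions, not minikube's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls GET /api/v1/nodes/<name> until the Ready condition
// is True, roughly what the round_trippers loop in the log is doing.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "ha-565823-m02", 6*time.Minute); err != nil {
		fmt.Println("node not ready:", err)
	}
}
```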
	I1212 00:00:32.850412  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:32.850436  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:32.850447  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:32.850455  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:32.854786  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:33.349683  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:33.349707  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:33.349714  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:33.349718  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:33.354356  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:33.849742  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:33.849766  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:33.849774  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:33.849778  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:33.854313  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:34.350516  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:34.350539  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:34.350547  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:34.350551  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:34.355023  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:34.355775  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:34.850173  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:34.850197  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:34.850206  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:34.850210  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:34.853276  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:35.350529  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:35.350560  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:35.350568  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:35.350574  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:35.354219  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:35.850352  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:35.850378  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:35.850386  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:35.850391  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:35.853507  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:36.349531  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:36.349555  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:36.349566  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:36.349572  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:36.353110  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:36.849604  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:36.849629  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:36.849640  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:36.849645  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:36.856046  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:00:36.856697  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:37.349961  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:37.349980  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:37.349989  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:37.349993  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:37.354377  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:37.849622  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:37.849647  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:37.849660  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:37.849665  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:37.853494  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:38.349611  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:38.349641  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:38.349654  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:38.349686  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:38.354211  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:38.850399  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:38.850424  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:38.850434  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:38.850440  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:38.854312  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:39.350249  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:39.350275  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:39.350288  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:39.350293  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:39.354293  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:39.355152  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:39.849553  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:39.849578  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:39.849587  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:39.849592  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:39.854321  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:40.350406  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:40.350438  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:40.350450  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:40.350456  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:40.354039  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:40.850576  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:40.850604  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:40.850615  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:40.850620  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:40.854393  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.349882  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:41.349908  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:41.349919  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:41.349925  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:41.353612  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.849701  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:41.849723  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:41.849732  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:41.849737  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:41.852781  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.853447  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:42.349592  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:42.349615  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:42.349624  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:42.349629  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:42.352747  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:42.849858  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:42.849881  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:42.849889  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:42.849894  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:42.853198  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.350237  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:43.350265  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:43.350274  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:43.350278  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:43.353850  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.850187  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:43.850215  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:43.850227  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:43.850232  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:43.853783  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.854292  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:44.349681  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:44.349707  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:44.349714  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:44.349719  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:44.353562  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:44.849731  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:44.849764  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:44.849775  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:44.849783  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:44.853689  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:45.349741  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:45.349768  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:45.349777  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:45.349781  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:45.353601  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:45.849492  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:45.849515  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:45.849524  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:45.849528  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:45.853061  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:46.349543  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:46.349573  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:46.349584  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:46.349589  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:46.352599  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:46.353168  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:46.850149  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:46.850169  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:46.850177  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:46.850182  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:46.854205  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:47.350169  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:47.350191  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:47.350200  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:47.350206  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:47.353664  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:47.849752  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:47.849780  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:47.849793  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:47.849798  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:47.853354  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:48.350356  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:48.350379  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:48.350387  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:48.350391  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:48.353938  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:48.354537  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:48.849794  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:48.849820  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:48.849829  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:48.849834  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:48.853163  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:49.350186  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:49.350215  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:49.350224  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:49.350229  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:49.353713  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:49.849652  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:49.849676  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:49.849684  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:49.849687  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:49.853033  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.350113  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:50.350142  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:50.350153  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:50.350159  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:50.353742  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.849593  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:50.849613  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:50.849621  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:50.849624  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:50.852952  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.853510  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:51.349926  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:51.349948  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:51.349957  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:51.349963  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:51.353301  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:51.849615  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:51.849638  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:51.849646  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:51.849655  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:51.853844  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:52.350547  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.350572  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.350580  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.350584  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.354248  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:52.850223  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.850252  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.850263  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.850268  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.853470  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:52.854190  106017 node_ready.go:49] node "ha-565823-m02" has status "Ready":"True"
	I1212 00:00:52.854220  106017 node_ready.go:38] duration metric: took 20.504892955s for node "ha-565823-m02" to be "Ready" ...
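The 500 ms GET loop above is the node_ready wait: the test re-fetches the Node object until its Ready condition turns True, which here took about 20.5 s for ha-565823-m02. Below is a minimal client-go sketch of that kind of check, not minikube's actual node_ready implementation; the kubeconfig path is a placeholder and the node name is reused from the log.

```go
// Illustrative sketch: poll a node's Ready condition every 500ms,
// mirroring the GET loop in the log above. Not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, give up after 6 minutes, as the wait in the log does.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "ha-565823-m02", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished, err:", err)
}
```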
	I1212 00:00:52.854231  106017 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:00:52.854318  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:52.854327  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.854334  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.854339  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.859106  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:52.865543  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.865630  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4q46c
	I1212 00:00:52.865638  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.865646  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.865651  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.868523  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.869398  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.869413  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.869424  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.869431  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.871831  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.872543  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.872562  106017 pod_ready.go:82] duration metric: took 6.990987ms for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.872571  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.872619  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mqzbv
	I1212 00:00:52.872627  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.872633  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.872639  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.874818  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.875523  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.875541  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.875551  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.875557  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.877466  106017 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:00:52.878112  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.878131  106017 pod_ready.go:82] duration metric: took 5.554087ms for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.878140  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.878190  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823
	I1212 00:00:52.878197  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.878204  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.878211  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.880364  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.880870  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.880885  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.880891  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.880895  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.883116  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.883560  106017 pod_ready.go:93] pod "etcd-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.883576  106017 pod_ready.go:82] duration metric: took 5.430598ms for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.883587  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.883672  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m02
	I1212 00:00:52.883682  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.883691  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.883700  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.886455  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.887079  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.887092  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.887099  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.887104  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.889373  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.889794  106017 pod_ready.go:93] pod "etcd-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.889810  106017 pod_ready.go:82] duration metric: took 6.198051ms for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.889825  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.051288  106017 request.go:632] Waited for 161.36947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:00:53.051368  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:00:53.051379  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.051390  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.051401  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.055000  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
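The request.go lines such as "Waited for 161.36947ms due to client-side throttling, not priority and fairness" come from client-go's own token-bucket rate limiter, not from the API server. A short sketch of where that limiter is configured; the QPS/Burst values below are arbitrary examples, not minikube's settings, and the kubeconfig path is a placeholder.

```go
// Illustrative sketch: client-go delays requests client-side once they exceed
// the configured QPS/Burst (defaults are 5 and 10), producing the
// "Waited for ... due to client-side throttling" log lines above.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// Example values only; raising them reduces client-side waits.
	cfg.QPS = 20
	cfg.Burst = 40

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = client
	fmt.Println("client configured with QPS", cfg.QPS, "and burst", cfg.Burst)
}
```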
	I1212 00:00:53.251236  106017 request.go:632] Waited for 195.409824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:53.251334  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:53.251344  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.251352  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.251356  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.254773  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.255341  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:53.255360  106017 pod_ready.go:82] duration metric: took 365.529115ms for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.255371  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.450696  106017 request.go:632] Waited for 195.24618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:00:53.450768  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:00:53.450773  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.450782  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.450788  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.454132  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.650685  106017 request.go:632] Waited for 195.384956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:53.650745  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:53.650751  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.650758  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.650762  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.654400  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.655229  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:53.655251  106017 pod_ready.go:82] duration metric: took 399.872206ms for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.655268  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.850267  106017 request.go:632] Waited for 194.898023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:00:53.850372  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:00:53.850386  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.850398  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.850408  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.853683  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.050714  106017 request.go:632] Waited for 196.358846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.050791  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.050798  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.050810  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.050821  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.056588  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:00:54.057030  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.057048  106017 pod_ready.go:82] duration metric: took 401.768958ms for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.057064  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.251122  106017 request.go:632] Waited for 193.98571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:00:54.251196  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:00:54.251202  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.251209  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.251215  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.254477  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.451067  106017 request.go:632] Waited for 195.40262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:54.451162  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:54.451179  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.451188  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.451192  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.455097  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.455639  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.455655  106017 pod_ready.go:82] duration metric: took 398.584366ms for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.455670  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.650842  106017 request.go:632] Waited for 195.080577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:00:54.650913  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:00:54.650919  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.650926  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.650932  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.654798  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.851030  106017 request.go:632] Waited for 195.376895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.851100  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.851111  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.851123  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.851133  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.854879  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.855493  106017 pod_ready.go:93] pod "kube-proxy-hr5qc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.855509  106017 pod_ready.go:82] duration metric: took 399.831743ms for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.855522  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.050825  106017 request.go:632] Waited for 195.216303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:00:55.050891  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:00:55.050897  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.050904  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.050910  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.055618  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.250720  106017 request.go:632] Waited for 194.371361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:55.250781  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:55.250786  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.250795  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.250802  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.255100  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.255613  106017 pod_ready.go:93] pod "kube-proxy-p2lsd" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:55.255633  106017 pod_ready.go:82] duration metric: took 400.104583ms for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.255659  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.450909  106017 request.go:632] Waited for 195.147666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:00:55.450990  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:00:55.450999  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.451016  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.451026  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.455430  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.650645  106017 request.go:632] Waited for 194.425591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:55.650713  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:55.650719  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.650727  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.650736  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.654680  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:55.655493  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:55.655512  106017 pod_ready.go:82] duration metric: took 399.840095ms for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.655522  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.850696  106017 request.go:632] Waited for 195.072101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:00:55.850764  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:00:55.850769  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.850777  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.850782  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.855247  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.050354  106017 request.go:632] Waited for 194.294814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:56.050422  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:56.050428  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.050438  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.050441  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.053971  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:56.054426  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:56.054442  106017 pod_ready.go:82] duration metric: took 398.914314ms for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:56.054455  106017 pod_ready.go:39] duration metric: took 3.200213001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:00:56.054475  106017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:00:56.054526  106017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:00:56.072661  106017 api_server.go:72] duration metric: took 23.930895419s to wait for apiserver process to appear ...
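The api_server wait above first confirms the process exists by running `sudo pgrep -xnf kube-apiserver.*minikube.*` on the guest over SSH. A local-only sketch of the same probe with os/exec (a hypothetical stand-in for minikube's ssh_runner): pgrep exits 0 only when a matching process is found.

```go
// Illustrative sketch: check for a kube-apiserver process via pgrep's exit code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*")
	if err := cmd.Run(); err != nil {
		// Non-zero exit (or pgrep unavailable) means no matching process.
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Println("kube-apiserver process is running")
}
```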
	I1212 00:00:56.072689  106017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:00:56.072711  106017 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1212 00:00:56.077698  106017 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1212 00:00:56.077790  106017 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1212 00:00:56.077803  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.077813  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.077823  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.078602  106017 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:00:56.078749  106017 api_server.go:141] control plane version: v1.31.2
	I1212 00:00:56.078777  106017 api_server.go:131] duration metric: took 6.080516ms to wait for apiserver health ...
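Once the process is up, the health wait issues a raw GET to /healthz (expecting the literal body "ok") and then reads /version to record the control-plane version (v1.31.2 here). A client-go sketch of both probes, assuming a placeholder kubeconfig path; the log itself hits the endpoint directly at https://192.168.39.19:8443.

```go
// Illustrative sketch: probe /healthz and /version through client-go's
// discovery REST client. Not minikube's api_server.go implementation.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz; a healthy apiserver answers with the body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("healthz:", string(body))

	// GET /version to read the control plane version.
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
```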
	I1212 00:00:56.078787  106017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:00:56.251224  106017 request.go:632] Waited for 172.358728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.251308  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.251314  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.251322  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.251328  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.257604  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:00:56.263097  106017 system_pods.go:59] 17 kube-system pods found
	I1212 00:00:56.263131  106017 system_pods.go:61] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:00:56.263138  106017 system_pods.go:61] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:00:56.263146  106017 system_pods.go:61] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:00:56.263154  106017 system_pods.go:61] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:00:56.263159  106017 system_pods.go:61] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:00:56.263164  106017 system_pods.go:61] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:00:56.263168  106017 system_pods.go:61] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:00:56.263173  106017 system_pods.go:61] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:00:56.263179  106017 system_pods.go:61] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:00:56.263184  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:00:56.263191  106017 system_pods.go:61] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:00:56.263197  106017 system_pods.go:61] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:00:56.263203  106017 system_pods.go:61] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:00:56.263211  106017 system_pods.go:61] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:00:56.263216  106017 system_pods.go:61] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:00:56.263222  106017 system_pods.go:61] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:00:56.263228  106017 system_pods.go:61] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:00:56.263239  106017 system_pods.go:74] duration metric: took 184.44261ms to wait for pod list to return data ...
	I1212 00:00:56.263253  106017 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:00:56.450737  106017 request.go:632] Waited for 187.395152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:00:56.450799  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:00:56.450805  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.450817  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.450824  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.455806  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.456064  106017 default_sa.go:45] found service account: "default"
	I1212 00:00:56.456083  106017 default_sa.go:55] duration metric: took 192.823176ms for default service account to be created ...
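The default_sa wait lists the ServiceAccounts in the "default" namespace until one named "default" exists. A minimal sketch of that check, under the same placeholder-kubeconfig assumption:

```go
// Illustrative sketch: confirm the "default" ServiceAccount has been created.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sas, err := client.CoreV1().ServiceAccounts("default").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			fmt.Println("found service account:", sa.Name)
			return
		}
	}
	fmt.Println("default service account not created yet")
}
```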
	I1212 00:00:56.456093  106017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:00:56.650300  106017 request.go:632] Waited for 194.107546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.650372  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.650380  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.650392  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.650403  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.656388  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:00:56.662029  106017 system_pods.go:86] 17 kube-system pods found
	I1212 00:00:56.662073  106017 system_pods.go:89] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:00:56.662082  106017 system_pods.go:89] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:00:56.662088  106017 system_pods.go:89] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:00:56.662094  106017 system_pods.go:89] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:00:56.662100  106017 system_pods.go:89] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:00:56.662108  106017 system_pods.go:89] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:00:56.662118  106017 system_pods.go:89] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:00:56.662124  106017 system_pods.go:89] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:00:56.662133  106017 system_pods.go:89] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:00:56.662140  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:00:56.662148  106017 system_pods.go:89] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:00:56.662153  106017 system_pods.go:89] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:00:56.662161  106017 system_pods.go:89] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:00:56.662165  106017 system_pods.go:89] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:00:56.662173  106017 system_pods.go:89] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:00:56.662178  106017 system_pods.go:89] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:00:56.662187  106017 system_pods.go:89] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:00:56.662196  106017 system_pods.go:126] duration metric: took 206.091251ms to wait for k8s-apps to be running ...
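The system_pods check lists everything in kube-system and treats phase Running as healthy, which matches the 17 "Running" entries above. A sketch of that listing, again with a placeholder kubeconfig path:

```go
// Illustrative sketch: list kube-system pods and flag any that are not Running.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("pod %q is %s, not Running\n", p.Name, p.Status.Phase)
		}
	}
}
```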
	I1212 00:00:56.662210  106017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:00:56.662262  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:00:56.679491  106017 system_svc.go:56] duration metric: took 17.268621ms WaitForService to wait for kubelet
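The kubelet check above shells out over SSH to `sudo systemctl is-active --quiet service kubelet`. Run locally, the same probe reduces to an exit-code test; a short sketch:

```go
// Illustrative sketch: systemctl is-active exits 0 when the unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
```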
	I1212 00:00:56.679526  106017 kubeadm.go:582] duration metric: took 24.537768524s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:00:56.679546  106017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:00:56.851276  106017 request.go:632] Waited for 171.630771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1212 00:00:56.851341  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1212 00:00:56.851347  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.851354  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.851363  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.856253  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.857605  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:00:56.857634  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:00:56.857650  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:00:56.857655  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:00:56.857661  106017 node_conditions.go:105] duration metric: took 178.109574ms to run NodePressure ...
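The NodePressure step reads each node's reported capacity (ephemeral storage 17734596Ki and 2 CPUs per node here) from a single /api/v1/nodes list. A sketch of pulling those capacities with client-go, assuming the same placeholder kubeconfig:

```go
// Illustrative sketch: print ephemeral-storage and CPU capacity for every node.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```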
	I1212 00:00:56.857683  106017 start.go:241] waiting for startup goroutines ...
	I1212 00:00:56.857713  106017 start.go:255] writing updated cluster config ...
	I1212 00:00:56.859819  106017 out.go:201] 
	I1212 00:00:56.861355  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:56.861459  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:56.863133  106017 out.go:177] * Starting "ha-565823-m03" control-plane node in "ha-565823" cluster
	I1212 00:00:56.864330  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:00:56.864351  106017 cache.go:56] Caching tarball of preloaded images
	I1212 00:00:56.864443  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:00:56.864454  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 00:00:56.864537  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:56.864703  106017 start.go:360] acquireMachinesLock for ha-565823-m03: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:00:56.864743  106017 start.go:364] duration metric: took 22.236µs to acquireMachinesLock for "ha-565823-m03"
	I1212 00:00:56.864764  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:56.864862  106017 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1212 00:00:56.866313  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 00:00:56.866390  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:56.866430  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:56.881400  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1212 00:00:56.881765  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:56.882247  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:56.882274  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:56.882594  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:56.882778  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:00:56.882918  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:00:56.883084  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1212 00:00:56.883116  106017 client.go:168] LocalClient.Create starting
	I1212 00:00:56.883150  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 00:00:56.883194  106017 main.go:141] libmachine: Decoding PEM data...
	I1212 00:00:56.883215  106017 main.go:141] libmachine: Parsing certificate...
	I1212 00:00:56.883281  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 00:00:56.883314  106017 main.go:141] libmachine: Decoding PEM data...
	I1212 00:00:56.883330  106017 main.go:141] libmachine: Parsing certificate...
	I1212 00:00:56.883354  106017 main.go:141] libmachine: Running pre-create checks...
	I1212 00:00:56.883365  106017 main.go:141] libmachine: (ha-565823-m03) Calling .PreCreateCheck
	I1212 00:00:56.883572  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:00:56.883977  106017 main.go:141] libmachine: Creating machine...
	I1212 00:00:56.883994  106017 main.go:141] libmachine: (ha-565823-m03) Calling .Create
	I1212 00:00:56.884152  106017 main.go:141] libmachine: (ha-565823-m03) Creating KVM machine...
	I1212 00:00:56.885388  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found existing default KVM network
	I1212 00:00:56.885537  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found existing private KVM network mk-ha-565823
	I1212 00:00:56.885677  106017 main.go:141] libmachine: (ha-565823-m03) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 ...
	I1212 00:00:56.885696  106017 main.go:141] libmachine: (ha-565823-m03) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 00:00:56.885764  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:56.885674  106823 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:00:56.885859  106017 main.go:141] libmachine: (ha-565823-m03) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 00:00:57.157670  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.157529  106823 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa...
	I1212 00:00:57.207576  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.207455  106823 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/ha-565823-m03.rawdisk...
	I1212 00:00:57.207627  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Writing magic tar header
	I1212 00:00:57.207643  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Writing SSH key tar header
	I1212 00:00:57.207726  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.207648  106823 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 ...
	I1212 00:00:57.207776  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03
	I1212 00:00:57.207803  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 (perms=drwx------)
	I1212 00:00:57.207814  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 00:00:57.207826  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:00:57.207832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 00:00:57.207841  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 00:00:57.207846  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins
	I1212 00:00:57.207853  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home
	I1212 00:00:57.207859  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Skipping /home - not owner
	I1212 00:00:57.207869  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 00:00:57.207875  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 00:00:57.207903  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 00:00:57.207923  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 00:00:57.207937  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 00:00:57.207945  106017 main.go:141] libmachine: (ha-565823-m03) Creating domain...
	I1212 00:00:57.208764  106017 main.go:141] libmachine: (ha-565823-m03) define libvirt domain using xml: 
	I1212 00:00:57.208779  106017 main.go:141] libmachine: (ha-565823-m03) <domain type='kvm'>
	I1212 00:00:57.208785  106017 main.go:141] libmachine: (ha-565823-m03)   <name>ha-565823-m03</name>
	I1212 00:00:57.208790  106017 main.go:141] libmachine: (ha-565823-m03)   <memory unit='MiB'>2200</memory>
	I1212 00:00:57.208795  106017 main.go:141] libmachine: (ha-565823-m03)   <vcpu>2</vcpu>
	I1212 00:00:57.208799  106017 main.go:141] libmachine: (ha-565823-m03)   <features>
	I1212 00:00:57.208803  106017 main.go:141] libmachine: (ha-565823-m03)     <acpi/>
	I1212 00:00:57.208807  106017 main.go:141] libmachine: (ha-565823-m03)     <apic/>
	I1212 00:00:57.208816  106017 main.go:141] libmachine: (ha-565823-m03)     <pae/>
	I1212 00:00:57.208827  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.208832  106017 main.go:141] libmachine: (ha-565823-m03)   </features>
	I1212 00:00:57.208837  106017 main.go:141] libmachine: (ha-565823-m03)   <cpu mode='host-passthrough'>
	I1212 00:00:57.208849  106017 main.go:141] libmachine: (ha-565823-m03)   
	I1212 00:00:57.208858  106017 main.go:141] libmachine: (ha-565823-m03)   </cpu>
	I1212 00:00:57.208866  106017 main.go:141] libmachine: (ha-565823-m03)   <os>
	I1212 00:00:57.208875  106017 main.go:141] libmachine: (ha-565823-m03)     <type>hvm</type>
	I1212 00:00:57.208882  106017 main.go:141] libmachine: (ha-565823-m03)     <boot dev='cdrom'/>
	I1212 00:00:57.208899  106017 main.go:141] libmachine: (ha-565823-m03)     <boot dev='hd'/>
	I1212 00:00:57.208912  106017 main.go:141] libmachine: (ha-565823-m03)     <bootmenu enable='no'/>
	I1212 00:00:57.208918  106017 main.go:141] libmachine: (ha-565823-m03)   </os>
	I1212 00:00:57.208926  106017 main.go:141] libmachine: (ha-565823-m03)   <devices>
	I1212 00:00:57.208933  106017 main.go:141] libmachine: (ha-565823-m03)     <disk type='file' device='cdrom'>
	I1212 00:00:57.208946  106017 main.go:141] libmachine: (ha-565823-m03)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/boot2docker.iso'/>
	I1212 00:00:57.208957  106017 main.go:141] libmachine: (ha-565823-m03)       <target dev='hdc' bus='scsi'/>
	I1212 00:00:57.208964  106017 main.go:141] libmachine: (ha-565823-m03)       <readonly/>
	I1212 00:00:57.208971  106017 main.go:141] libmachine: (ha-565823-m03)     </disk>
	I1212 00:00:57.208981  106017 main.go:141] libmachine: (ha-565823-m03)     <disk type='file' device='disk'>
	I1212 00:00:57.208993  106017 main.go:141] libmachine: (ha-565823-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 00:00:57.209040  106017 main.go:141] libmachine: (ha-565823-m03)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/ha-565823-m03.rawdisk'/>
	I1212 00:00:57.209066  106017 main.go:141] libmachine: (ha-565823-m03)       <target dev='hda' bus='virtio'/>
	I1212 00:00:57.209075  106017 main.go:141] libmachine: (ha-565823-m03)     </disk>
	I1212 00:00:57.209092  106017 main.go:141] libmachine: (ha-565823-m03)     <interface type='network'>
	I1212 00:00:57.209105  106017 main.go:141] libmachine: (ha-565823-m03)       <source network='mk-ha-565823'/>
	I1212 00:00:57.209114  106017 main.go:141] libmachine: (ha-565823-m03)       <model type='virtio'/>
	I1212 00:00:57.209125  106017 main.go:141] libmachine: (ha-565823-m03)     </interface>
	I1212 00:00:57.209136  106017 main.go:141] libmachine: (ha-565823-m03)     <interface type='network'>
	I1212 00:00:57.209145  106017 main.go:141] libmachine: (ha-565823-m03)       <source network='default'/>
	I1212 00:00:57.209155  106017 main.go:141] libmachine: (ha-565823-m03)       <model type='virtio'/>
	I1212 00:00:57.209164  106017 main.go:141] libmachine: (ha-565823-m03)     </interface>
	I1212 00:00:57.209179  106017 main.go:141] libmachine: (ha-565823-m03)     <serial type='pty'>
	I1212 00:00:57.209191  106017 main.go:141] libmachine: (ha-565823-m03)       <target port='0'/>
	I1212 00:00:57.209198  106017 main.go:141] libmachine: (ha-565823-m03)     </serial>
	I1212 00:00:57.209211  106017 main.go:141] libmachine: (ha-565823-m03)     <console type='pty'>
	I1212 00:00:57.209219  106017 main.go:141] libmachine: (ha-565823-m03)       <target type='serial' port='0'/>
	I1212 00:00:57.209228  106017 main.go:141] libmachine: (ha-565823-m03)     </console>
	I1212 00:00:57.209238  106017 main.go:141] libmachine: (ha-565823-m03)     <rng model='virtio'>
	I1212 00:00:57.209275  106017 main.go:141] libmachine: (ha-565823-m03)       <backend model='random'>/dev/random</backend>
	I1212 00:00:57.209299  106017 main.go:141] libmachine: (ha-565823-m03)     </rng>
	I1212 00:00:57.209310  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.209316  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.209327  106017 main.go:141] libmachine: (ha-565823-m03)   </devices>
	I1212 00:00:57.209344  106017 main.go:141] libmachine: (ha-565823-m03) </domain>
	I1212 00:00:57.209358  106017 main.go:141] libmachine: (ha-565823-m03) 
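The block above is the complete libvirt domain definition the kvm2 driver submits for ha-565823-m03: the boot2docker ISO attached as a bootable cdrom, the raw virtio disk, one NIC on the private mk-ha-565823 network plus one on the default network, a serial console and a virtio RNG. As a rough illustration of how such a definition can be rendered before being handed to libvirt, the Go sketch below fills a comparable template from a small struct; the struct fields, paths and template text are assumptions for illustration, not the driver's actual code.

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values that vary per machine.
// Field names are illustrative, not the kvm2 driver's own struct.
type domainConfig struct {
	Name      string
	MemoryMiB int
	VCPUs     int
	DiskPath  string
	ISOPath   string
	Network   string // private network, e.g. mk-ha-565823
}

// domainXML mirrors the shape of the definition logged above.
const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <features><acpi/><apic/><pae/></features>
  <cpu mode='host-passthrough'/>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'><target port='0'/></serial>
    <console type='pty'><target type='serial' port='0'/></console>
    <rng model='virtio'><backend model='random'>/dev/random</backend></rng>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:      "ha-565823-m03",
		MemoryMiB: 2200,
		VCPUs:     2,
		DiskPath:  "/path/to/ha-565823-m03.rawdisk",
		ISOPath:   "/path/to/boot2docker.iso",
		Network:   "mk-ha-565823",
	}
	// Render to stdout; the driver instead hands the result to libvirt's
	// define-domain-from-XML call, as the "define libvirt domain using xml" line shows.
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}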
	I1212 00:00:57.216296  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:a0:11:b6 in network default
	I1212 00:00:57.216833  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring networks are active...
	I1212 00:00:57.216849  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:57.217611  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring network default is active
	I1212 00:00:57.217884  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring network mk-ha-565823 is active
	I1212 00:00:57.218224  106017 main.go:141] libmachine: (ha-565823-m03) Getting domain xml...
	I1212 00:00:57.218920  106017 main.go:141] libmachine: (ha-565823-m03) Creating domain...
	I1212 00:00:58.452742  106017 main.go:141] libmachine: (ha-565823-m03) Waiting to get IP...
	I1212 00:00:58.453425  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:58.453790  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:58.453832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:58.453785  106823 retry.go:31] will retry after 272.104158ms: waiting for machine to come up
	I1212 00:00:58.727281  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:58.727898  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:58.727928  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:58.727841  106823 retry.go:31] will retry after 285.622453ms: waiting for machine to come up
	I1212 00:00:59.015493  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.016037  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.016069  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.015997  106823 retry.go:31] will retry after 462.910385ms: waiting for machine to come up
	I1212 00:00:59.480661  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.481128  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.481154  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.481091  106823 retry.go:31] will retry after 428.639733ms: waiting for machine to come up
	I1212 00:00:59.911938  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.912474  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.912505  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.912415  106823 retry.go:31] will retry after 493.229639ms: waiting for machine to come up
	I1212 00:01:00.406997  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:00.407456  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:00.407482  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:00.407400  106823 retry.go:31] will retry after 633.230425ms: waiting for machine to come up
	I1212 00:01:01.042449  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:01.042884  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:01.042905  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:01.042838  106823 retry.go:31] will retry after 978.049608ms: waiting for machine to come up
	I1212 00:01:02.022776  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:02.023212  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:02.023245  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:02.023153  106823 retry.go:31] will retry after 1.111513755s: waiting for machine to come up
	I1212 00:01:03.136308  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:03.136734  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:03.136763  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:03.136679  106823 retry.go:31] will retry after 1.728462417s: waiting for machine to come up
	I1212 00:01:04.867619  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:04.868118  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:04.868157  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:04.868052  106823 retry.go:31] will retry after 1.898297589s: waiting for machine to come up
	I1212 00:01:06.769272  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:06.769757  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:06.769825  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:06.769731  106823 retry.go:31] will retry after 1.922578081s: waiting for machine to come up
	I1212 00:01:08.693477  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:08.693992  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:08.694026  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:08.693918  106823 retry.go:31] will retry after 2.235570034s: waiting for machine to come up
	I1212 00:01:10.932341  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:10.932805  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:10.932827  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:10.932750  106823 retry.go:31] will retry after 4.200404272s: waiting for machine to come up
	I1212 00:01:15.136581  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:15.136955  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:15.136979  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:15.136906  106823 retry.go:31] will retry after 4.331994391s: waiting for machine to come up
	I1212 00:01:19.472184  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.472659  106017 main.go:141] libmachine: (ha-565823-m03) Found IP for machine: 192.168.39.95
	I1212 00:01:19.472679  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has current primary IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
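The repeated "will retry after …: waiting for machine to come up" lines above come from polling the DHCP leases with a growing, jittered delay until the new domain reports an address. Below is a minimal sketch of that retry-with-backoff pattern, assuming a caller-supplied probe function; the initial delay, growth factor and jitter are illustrative choices, not minikube's retry.go values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds or the deadline
// passes, sleeping an increasing, jittered interval between attempts,
// similar in spirit to the "will retry after ..." lines above.
func retryWithBackoff(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add up to 50% jitter, then grow the base delay geometrically.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	tries := 0
	err := retryWithBackoff(func() error {
		tries++
		if tries < 4 {
			return errors.New("unable to find current IP address")
		}
		return nil // pretend the DHCP lease finally appeared
	}, 30*time.Second)
	fmt.Println("result:", err)
}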
	I1212 00:01:19.472686  106017 main.go:141] libmachine: (ha-565823-m03) Reserving static IP address...
	I1212 00:01:19.473105  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find host DHCP lease matching {name: "ha-565823-m03", mac: "52:54:00:03:bd:55", ip: "192.168.39.95"} in network mk-ha-565823
	I1212 00:01:19.544988  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Getting to WaitForSSH function...
	I1212 00:01:19.545019  106017 main.go:141] libmachine: (ha-565823-m03) Reserved static IP address: 192.168.39.95
	I1212 00:01:19.545082  106017 main.go:141] libmachine: (ha-565823-m03) Waiting for SSH to be available...
	I1212 00:01:19.547914  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.548457  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.548493  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.548645  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using SSH client type: external
	I1212 00:01:19.548672  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa (-rw-------)
	I1212 00:01:19.548700  106017 main.go:141] libmachine: (ha-565823-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:01:19.548714  106017 main.go:141] libmachine: (ha-565823-m03) DBG | About to run SSH command:
	I1212 00:01:19.548726  106017 main.go:141] libmachine: (ha-565823-m03) DBG | exit 0
	I1212 00:01:19.675749  106017 main.go:141] libmachine: (ha-565823-m03) DBG | SSH cmd err, output: <nil>: 
	I1212 00:01:19.676029  106017 main.go:141] libmachine: (ha-565823-m03) KVM machine creation complete!
	I1212 00:01:19.676360  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:01:19.676900  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:19.677088  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:19.677296  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:01:19.677311  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetState
	I1212 00:01:19.678472  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:01:19.678488  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:01:19.678497  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:01:19.678505  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.680612  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.680988  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.681021  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.681172  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.681326  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.681449  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.681545  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.681635  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.681832  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.681842  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:01:19.794939  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:01:19.794969  106017 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:01:19.794980  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.797552  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.797884  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.797916  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.798040  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.798220  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.798369  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.798507  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.798667  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.798834  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.798844  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:01:19.912451  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:01:19.912540  106017 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:01:19.912555  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:01:19.912568  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:19.912805  106017 buildroot.go:166] provisioning hostname "ha-565823-m03"
	I1212 00:01:19.912831  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:19.912939  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.915606  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.916027  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.916059  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.916213  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.916386  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.916533  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.916630  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.916776  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.917012  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.917027  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823-m03 && echo "ha-565823-m03" | sudo tee /etc/hostname
	I1212 00:01:20.047071  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823-m03
	
	I1212 00:01:20.047100  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.049609  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.050009  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.050034  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.050209  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.050389  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.050537  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.050700  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.050854  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.051086  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.051105  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:01:20.174838  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:01:20.174877  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:01:20.174898  106017 buildroot.go:174] setting up certificates
	I1212 00:01:20.174909  106017 provision.go:84] configureAuth start
	I1212 00:01:20.174924  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:20.175232  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:20.177664  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.178007  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.178038  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.178124  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.180472  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.180778  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.180806  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.180963  106017 provision.go:143] copyHostCerts
	I1212 00:01:20.180995  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:01:20.181046  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:01:20.181058  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:01:20.181146  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:01:20.181242  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:01:20.181266  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:01:20.181279  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:01:20.181315  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:01:20.181387  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:01:20.181413  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:01:20.181419  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:01:20.181456  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:01:20.181524  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823-m03 san=[127.0.0.1 192.168.39.95 ha-565823-m03 localhost minikube]
	I1212 00:01:20.442822  106017 provision.go:177] copyRemoteCerts
	I1212 00:01:20.442883  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:01:20.442916  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.445614  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.445950  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.445983  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.446122  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.446304  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.446460  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.446571  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:20.533808  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:01:20.533894  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:01:20.558631  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:01:20.558695  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:01:20.584088  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:01:20.584173  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:01:20.608061  106017 provision.go:87] duration metric: took 433.135165ms to configureAuth
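configureAuth, timed above, issues a server certificate whose SAN list (127.0.0.1, 192.168.39.95, ha-565823-m03, localhost, minikube) matches the earlier provision.go line. The following is a self-contained sketch of issuing such a certificate from a throwaway CA with crypto/x509; key size, validity period and the in-memory handling are simplifying assumptions rather than minikube's provisioning code, which reuses the CA under .minikube/certs and writes the files to disk.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA; error handling for key generation is elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same kind of SAN list as the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-565823-m03", Organization: []string{"jenkins.ha-565823-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.95")},
		DNSNames:     []string{"ha-565823-m03", "localhost", "minikube"},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pemCert := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("issued server.pem (%d bytes) with IP and DNS SANs\n", len(pemCert))
}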
	I1212 00:01:20.608090  106017 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:01:20.608294  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:20.608371  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.611003  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.611319  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.611348  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.611489  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.611709  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.611885  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.612026  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.612174  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.612326  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.612341  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:01:20.847014  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:01:20.847049  106017 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:01:20.847062  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetURL
	I1212 00:01:20.848448  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using libvirt version 6000000
	I1212 00:01:20.850813  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.851216  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.851246  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.851443  106017 main.go:141] libmachine: Docker is up and running!
	I1212 00:01:20.851459  106017 main.go:141] libmachine: Reticulating splines...
	I1212 00:01:20.851469  106017 client.go:171] duration metric: took 23.968343391s to LocalClient.Create
	I1212 00:01:20.851499  106017 start.go:167] duration metric: took 23.968416391s to libmachine.API.Create "ha-565823"
	I1212 00:01:20.851513  106017 start.go:293] postStartSetup for "ha-565823-m03" (driver="kvm2")
	I1212 00:01:20.851525  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:01:20.851547  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:20.851812  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:01:20.851848  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.854066  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.854470  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.854498  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.854683  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.854881  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.855047  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.855202  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:20.942769  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:01:20.947268  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:01:20.947295  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:01:20.947350  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:01:20.947427  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:01:20.947438  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:01:20.947517  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:01:20.957067  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:01:20.982552  106017 start.go:296] duration metric: took 131.024484ms for postStartSetup
	I1212 00:01:20.982610  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:01:20.983169  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:20.985456  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.985914  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.985943  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.986219  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:01:20.986450  106017 start.go:128] duration metric: took 24.12157496s to createHost
	I1212 00:01:20.986480  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.988832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.989169  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.989192  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.989296  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.989476  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.989596  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.989695  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.989852  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.990012  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.990022  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:01:21.104340  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961681.076284817
	
	I1212 00:01:21.104366  106017 fix.go:216] guest clock: 1733961681.076284817
	I1212 00:01:21.104376  106017 fix.go:229] Guest: 2024-12-12 00:01:21.076284817 +0000 UTC Remote: 2024-12-12 00:01:20.986466192 +0000 UTC m=+151.148293246 (delta=89.818625ms)
	I1212 00:01:21.104397  106017 fix.go:200] guest clock delta is within tolerance: 89.818625ms
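The fix.go lines compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta is small. A short sketch of that comparison follows; the 2s tolerance is an assumed threshold for illustration, not minikube's constant.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far it
// sits from the supplied host timestamp. Parsing via float64 is approximate
// (sub-microsecond precision is lost), which is fine for a tolerance check.
func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := hostNow.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	// Values taken from the guest/remote timestamps logged above.
	host := time.Unix(0, 1733961680986466192)
	delta, err := clockDelta("1733961681.076284817", host)
	if err != nil {
		panic(err)
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}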
	I1212 00:01:21.104403  106017 start.go:83] releasing machines lock for "ha-565823-m03", held for 24.239651482s
	I1212 00:01:21.104427  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.104703  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:21.107255  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.107654  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.107680  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.109803  106017 out.go:177] * Found network options:
	I1212 00:01:21.111036  106017 out.go:177]   - NO_PROXY=192.168.39.19,192.168.39.103
	W1212 00:01:21.112272  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:01:21.112293  106017 proxy.go:119] fail to check proxy env: Error ip not in block
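The proxy.go warnings above come from checking whether node IPs are covered by a NO_PROXY entry, "Error ip not in block" indicating that no listed address or CIDR block matched. Below is a small sketch of that kind of membership check using net.ParseCIDR; the entry formats handled (plain IPs and CIDR blocks) are an assumption for illustration, not the exact rules minikube applies.

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipCoveredByNoProxy reports whether ip matches any NO_PROXY entry,
// accepting either literal IPs or CIDR blocks such as 192.168.39.0/24.
func ipCoveredByNoProxy(ip string, noProxy string) bool {
	addr := net.ParseIP(ip)
	if addr == nil {
		return false
	}
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if strings.Contains(entry, "/") {
			if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(addr) {
				return true
			}
			continue
		}
		if other := net.ParseIP(entry); other != nil && other.Equal(addr) {
			return true
		}
	}
	return false
}

func main() {
	noProxy := "192.168.39.19,192.168.39.103"
	fmt.Println(ipCoveredByNoProxy("192.168.39.95", noProxy)) // false: third node not yet listed
	fmt.Println(ipCoveredByNoProxy("192.168.39.19", noProxy)) // true
}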
	I1212 00:01:21.112306  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.112787  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.112963  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.113063  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:01:21.113107  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	W1212 00:01:21.113169  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:01:21.113192  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:01:21.113246  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:01:21.113266  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:21.115806  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.115895  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116242  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.116269  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116313  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.116334  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116399  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:21.116570  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:21.116593  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:21.116694  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:21.116713  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:21.116861  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:21.116856  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:21.116989  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:21.354040  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:01:21.360555  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:01:21.360632  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:01:21.379750  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:01:21.379780  106017 start.go:495] detecting cgroup driver to use...
	I1212 00:01:21.379863  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:01:21.395389  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:01:21.409350  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:01:21.409431  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:01:21.425472  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:01:21.440472  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:01:21.567746  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:01:21.711488  106017 docker.go:233] disabling docker service ...
	I1212 00:01:21.711577  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:01:21.727302  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:01:21.740916  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:01:21.878118  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:01:22.013165  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:01:22.031377  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:01:22.050768  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:01:22.050841  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.062469  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:01:22.062542  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.074854  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.085834  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.096567  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:01:22.110009  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.121122  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.139153  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
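The sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf in place: pinning pause_image to registry.k8s.io/pause:3.10, switching cgroup_manager to cgroupfs, resetting conmon_cgroup and seeding default_sysctls. A sketch of the same kind of in-place key rewrite done with a Go regexp follows; the helper name and the regexp-based approach are illustrative, not minikube's crio.go.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfValue rewrites `key = ...` lines in a crio drop-in, mirroring the
// logged `sed -i 's|^.*key = .*$|key = "value"|'` commands.
func setConfValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	updated := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, updated, 0644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Println("pause_image:", err)
	}
	if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
		fmt.Println("cgroup_manager:", err)
	}
}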
	I1212 00:01:22.150221  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:01:22.160252  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:01:22.160329  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:01:22.175082  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
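Before restarting CRI-O, the runner verifies bridge netfilter support, falls back to `modprobe br_netfilter` when the sysctl path is missing, and enables IPv4 forwarding. The sketch below performs the same two checks locally; it assumes root privileges (or passwordless sudo for modprobe) and uses the usual procfs paths, both assumptions of the example rather than guarantees.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence above: probe the sysctl first,
// then fall back to loading the br_netfilter module if the proc entry is absent.
func ensureBridgeNetfilter() error {
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err == nil {
		return nil // bridge netfilter already available
	}
	out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput()
	if err != nil {
		return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
	}
	return nil
}

// enableIPForward writes the toggle the way the logged
// `echo 1 > /proc/sys/net/ipv4/ip_forward` command does.
func enableIPForward() error {
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("bridge netfilter:", err)
	}
	if err := enableIPForward(); err != nil {
		fmt.Println("ip_forward:", err)
	}
}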
	I1212 00:01:22.185329  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:22.327197  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:01:22.421776  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:01:22.421853  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:01:22.427874  106017 start.go:563] Will wait 60s for crictl version
	I1212 00:01:22.427937  106017 ssh_runner.go:195] Run: which crictl
	I1212 00:01:22.432412  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:01:22.478561  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:01:22.478659  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:01:22.507894  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:01:22.541025  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:01:22.542600  106017 out.go:177]   - env NO_PROXY=192.168.39.19
	I1212 00:01:22.544205  106017 out.go:177]   - env NO_PROXY=192.168.39.19,192.168.39.103
	I1212 00:01:22.545527  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:22.548679  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:22.549115  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:22.549143  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:22.549402  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:01:22.553987  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:01:22.567227  106017 mustload.go:65] Loading cluster: ha-565823
	I1212 00:01:22.567647  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:22.568059  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:22.568178  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:22.583960  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
	I1212 00:01:22.584451  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:22.584977  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:22.585002  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:22.585378  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:22.585624  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:01:22.587277  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:01:22.587636  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:22.587686  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:22.602128  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I1212 00:01:22.602635  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:22.603141  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:22.603163  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:22.603490  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:22.603676  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:01:22.603824  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.95
	I1212 00:01:22.603837  106017 certs.go:194] generating shared ca certs ...
	I1212 00:01:22.603856  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.603989  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:01:22.604025  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:01:22.604035  106017 certs.go:256] generating profile certs ...
	I1212 00:01:22.604113  106017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:01:22.604138  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c
	I1212 00:01:22.604153  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.95 192.168.39.254]
	I1212 00:01:22.747110  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c ...
	I1212 00:01:22.747151  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c: {Name:mke6cc66706783f55b7ebb6ba30cc07d7c6eb29b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.747333  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c ...
	I1212 00:01:22.747345  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c: {Name:mk0abaf339db164c799eddef60276ad5fb5ed33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.747431  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:01:22.747642  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1212 00:01:22.747827  106017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
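	The apiserver cert generated above is a serving certificate signed by the cluster CA whose IP SANs cover the service IP (10.96.0.1), localhost, the node IPs and the control-plane VIP (192.168.39.254), so any of those addresses verifies against the same certificate. A minimal Go sketch of that signing step follows; file names, validity period and subject are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		// Load the cluster CA (placeholder paths; assumes an RSA key in PKCS#1 PEM).
		caPEM, err := os.ReadFile("ca.crt")
		must(err)
		caKeyPEM, err := os.ReadFile("ca.key")
		must(err)
		caBlock, _ := pem.Decode(caPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		must(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		must(err)

		// Fresh key pair for the apiserver serving certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0), // validity chosen arbitrarily for the sketch
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs listed in the crypto.go line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.19"), net.ParseIP("192.168.39.103"),
				net.ParseIP("192.168.39.95"), net.ParseIP("192.168.39.254"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		must(err)
		must(os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
		must(os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600))
	}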
	I1212 00:01:22.747853  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:01:22.747874  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:01:22.747894  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:01:22.747911  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:01:22.747929  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:01:22.747949  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:01:22.747967  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:01:22.767751  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:01:22.767871  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:01:22.767924  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:01:22.767939  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:01:22.767972  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:01:22.768009  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:01:22.768041  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:01:22.768088  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:01:22.768123  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:22.768140  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:01:22.768153  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:01:22.768246  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:01:22.771620  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:22.772074  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:01:22.772105  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:22.772278  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:01:22.772487  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:01:22.772661  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:01:22.772805  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:01:22.855976  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 00:01:22.862422  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 00:01:22.875336  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 00:01:22.881430  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 00:01:22.892620  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 00:01:22.897804  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 00:01:22.910746  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 00:01:22.916511  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1212 00:01:22.927437  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 00:01:22.932403  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 00:01:22.945174  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 00:01:22.949699  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 00:01:22.963425  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:01:22.991332  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:01:23.014716  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:01:23.038094  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:01:23.062120  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1212 00:01:23.086604  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:01:23.110420  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:01:23.136037  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:01:23.162577  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:01:23.188311  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:01:23.211713  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:01:23.235230  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 00:01:23.253375  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 00:01:23.271455  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 00:01:23.289505  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1212 00:01:23.307850  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 00:01:23.325848  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 00:01:23.344038  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 00:01:23.362393  106017 ssh_runner.go:195] Run: openssl version
	I1212 00:01:23.368722  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:01:23.380405  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.385472  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.385534  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.392130  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:01:23.405241  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:01:23.418140  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.422762  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.422819  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.428754  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:01:23.441496  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:01:23.454394  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.459170  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.459227  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.465192  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:01:23.476720  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:01:23.481551  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:01:23.481615  106017 kubeadm.go:934] updating node {m03 192.168.39.95 8443 v1.31.2 crio true true} ...
	I1212 00:01:23.481715  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:01:23.481752  106017 kube-vip.go:115] generating kube-vip config ...
	I1212 00:01:23.481784  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:01:23.499895  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:01:23.499971  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
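	The manifest above is a static pod that kubelet runs directly from /etc/kubernetes/manifests; only a few values vary per cluster (VIP address, API server port, interface). The sketch below renders a heavily trimmed, hypothetical version of such a manifest with text/template, using the values from this log. It is not minikube's actual kube-vip template.

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// A cut-down, illustrative static pod manifest; real deployments carry many more env vars.
	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    args: ["manager"]
	    env:
	    - name: address
	      value: "{{ .VIP }}"
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	  hostNetwork: true
	`

	type vipParams struct {
		VIP       string
		Port      int
		Interface string
	}

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		// Values taken from the kube-vip config shown in the log above.
		if err := t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: 8443, Interface: "eth0"}); err != nil {
			log.Fatal(err)
		}
	}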
	I1212 00:01:23.500042  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:01:23.510617  106017 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1212 00:01:23.510681  106017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1212 00:01:23.520696  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1212 00:01:23.520748  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:01:23.520697  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1212 00:01:23.520779  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:01:23.520698  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1212 00:01:23.520844  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:01:23.520847  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:01:23.520904  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:01:23.539476  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:01:23.539619  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:01:23.539628  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1212 00:01:23.539658  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1212 00:01:23.539704  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1212 00:01:23.539735  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1212 00:01:23.554300  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1212 00:01:23.554341  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
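	The ?checksum=file:...sha256 URLs above indicate the kubelet/kubeadm/kubectl binaries are downloaded and verified against the published SHA-256 files rather than taken from the local cache. A minimal sketch of that download-and-verify step for one binary; the output path is an assumption for illustration:

	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
		"strings"
	)

	// download writes the body of url to path and returns the SHA-256 of the bytes written.
	func download(url, path string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		out, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
		if err != nil {
			return "", err
		}
		defer out.Close()
		sum := sha256.New()
		if _, err := io.Copy(io.MultiWriter(out, sum), resp.Body); err != nil {
			return "", err
		}
		return hex.EncodeToString(sum.Sum(nil)), nil
	}

	func main() {
		url := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
		got, err := download(url, "kubelet")
		if err != nil {
			log.Fatal(err)
		}
		resp, err := http.Get(url + ".sha256") // published digest, hex-encoded
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		want, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		// The .sha256 file holds the hex digest (possibly followed by a filename).
		if got != strings.Fields(string(want))[0] {
			log.Fatalf("checksum mismatch for kubelet: got %s", got)
		}
		fmt.Println("kubelet verified")
	}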
	I1212 00:01:24.410276  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 00:01:24.421207  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 00:01:24.438691  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:01:24.456935  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:01:24.474104  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:01:24.478799  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:01:24.492116  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:24.635069  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:01:24.653898  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:01:24.654454  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:24.654529  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:24.669805  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 00:01:24.670391  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:24.671018  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:24.671047  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:24.671400  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:24.671580  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:01:24.671761  106017 start.go:317] joinCluster: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:01:24.671883  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:01:24.671905  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:01:24.675034  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:24.675479  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:01:24.675501  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:24.675693  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:01:24.675871  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:01:24.676006  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:01:24.676127  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:01:24.845860  106017 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:01:24.845904  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4sbqiu.4yic5pe52bxp935w --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
	I1212 00:01:47.124612  106017 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4sbqiu.4yic5pe52bxp935w --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443": (22.27867542s)
	I1212 00:01:47.124662  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:01:47.623528  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823-m03 minikube.k8s.io/updated_at=2024_12_12T00_01_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=false
	I1212 00:01:47.763869  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565823-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1212 00:01:47.919307  106017 start.go:319] duration metric: took 23.247542297s to joinCluster
	I1212 00:01:47.919407  106017 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:01:47.919784  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:47.920983  106017 out.go:177] * Verifying Kubernetes components...
	I1212 00:01:47.922471  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:48.195755  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:01:48.249445  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:01:48.249790  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 00:01:48.249881  106017 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I1212 00:01:48.250202  106017 node_ready.go:35] waiting up to 6m0s for node "ha-565823-m03" to be "Ready" ...
	I1212 00:01:48.250300  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:48.250311  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:48.250329  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:48.250338  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:48.255147  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:48.750647  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:48.750680  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:48.750691  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:48.750699  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:48.755066  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:49.251152  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:49.251203  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:49.251216  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:49.251222  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:49.254927  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:49.751403  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:49.751424  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:49.751432  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:49.751436  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:49.754669  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:50.250595  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:50.250620  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:50.250629  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:50.250633  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:50.254009  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:50.254537  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:50.751206  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:50.751237  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:50.751250  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:50.751256  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:50.755159  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:51.250921  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:51.250950  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:51.250961  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:51.250967  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:51.255349  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:51.751245  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:51.751270  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:51.751283  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:51.751290  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:51.755162  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:52.250889  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:52.250916  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:52.250929  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:52.250935  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:52.254351  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:52.255115  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:52.750458  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:52.750481  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:52.750492  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:52.750499  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:52.753763  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:53.251029  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:53.251058  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:53.251071  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:53.251077  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:53.256338  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:01:53.751364  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:53.751389  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:53.751401  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:53.751414  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:53.754657  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:54.250629  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:54.250665  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:54.250675  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:54.250680  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:54.254457  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:54.255509  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:54.750450  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:54.750484  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:54.750496  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:54.750502  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:54.753928  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:55.251309  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:55.251338  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:55.251347  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:55.251351  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:55.254751  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:55.751050  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:55.751076  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:55.751089  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:55.751093  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:55.755810  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:56.250473  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:56.250504  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:56.250524  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:56.250530  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:56.253711  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:56.751414  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:56.751435  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:56.751444  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:56.751449  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:56.755218  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:56.755864  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:57.251118  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:57.251142  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:57.251150  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:57.251154  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:57.254747  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:57.750776  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:57.750806  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:57.750817  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:57.750829  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:57.754143  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:58.251295  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:58.251320  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:58.251329  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:58.251333  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:58.254626  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:58.750576  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:58.750599  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:58.750608  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:58.750611  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:58.754105  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:59.251173  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:59.251200  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:59.251209  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:59.251213  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:59.254355  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:59.255121  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:59.750953  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:59.750977  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:59.750985  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:59.750989  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:59.754627  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:00.250978  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:00.251004  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:00.251013  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:00.251016  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:00.254467  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:00.750877  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:00.750901  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:00.750912  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:00.750918  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:00.754221  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:01.251370  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:01.251393  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:01.251401  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:01.251405  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:01.254805  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:01.255406  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:01.750655  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:01.750676  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:01.750684  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:01.750690  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:01.753736  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:02.251367  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:02.251390  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:02.251399  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:02.251403  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:02.255039  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:02.750915  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:02.750948  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:02.750958  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:02.750964  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:02.754145  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:03.250760  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:03.250788  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:03.250798  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:03.250805  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:03.260534  106017 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:02:03.261313  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:03.750548  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:03.750571  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:03.750582  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:03.750587  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:03.753887  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:04.250808  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:04.250830  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:04.250838  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:04.250841  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:04.254163  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:04.750428  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:04.750453  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:04.750464  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:04.750469  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:04.754235  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.251014  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:05.251038  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:05.251053  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:05.251061  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:05.254268  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.751257  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:05.751286  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:05.751300  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:05.751309  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:05.754346  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.755137  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:06.250474  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:06.250500  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:06.250510  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:06.250515  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:06.253901  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:06.751012  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:06.751043  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:06.751062  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:06.751067  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:06.755777  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:02:07.250458  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:07.250481  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.250489  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.250494  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.254349  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:07.751140  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:07.751164  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.751172  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.751178  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.754545  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:07.755268  106017 node_ready.go:49] node "ha-565823-m03" has status "Ready":"True"
	I1212 00:02:07.755289  106017 node_ready.go:38] duration metric: took 19.505070997s for node "ha-565823-m03" to be "Ready" ...
	I1212 00:02:07.755298  106017 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:02:07.755371  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:07.755381  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.755388  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.755394  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.764865  106017 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:02:07.771847  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.771957  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4q46c
	I1212 00:02:07.771969  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.771979  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.771985  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.774662  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.775180  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.775197  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.775207  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.775212  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.778204  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.778657  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.778673  106017 pod_ready.go:82] duration metric: took 6.798091ms for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.778684  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.778739  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mqzbv
	I1212 00:02:07.778749  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.778759  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.778766  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.780968  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.781650  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.781667  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.781674  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.781679  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.783908  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.784542  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.784564  106017 pod_ready.go:82] duration metric: took 5.872725ms for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.784576  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.784636  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823
	I1212 00:02:07.784644  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.784651  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.784657  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.786892  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.787666  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.787681  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.787688  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.787694  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.789880  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.790470  106017 pod_ready.go:93] pod "etcd-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.790486  106017 pod_ready.go:82] duration metric: took 5.899971ms for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.790494  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.790537  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m02
	I1212 00:02:07.790545  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.790552  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.790555  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.793137  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.793764  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:07.793781  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.793791  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.793799  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.796241  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.796610  106017 pod_ready.go:93] pod "etcd-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.796625  106017 pod_ready.go:82] duration metric: took 6.124204ms for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.796636  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.952109  106017 request.go:632] Waited for 155.381921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m03
	I1212 00:02:07.952174  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m03
	I1212 00:02:07.952179  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.952187  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.952193  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.955641  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.151556  106017 request.go:632] Waited for 195.239119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:08.151668  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:08.151684  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.151694  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.151702  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.154961  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.155639  106017 pod_ready.go:93] pod "etcd-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.155660  106017 pod_ready.go:82] duration metric: took 359.016335ms for pod "etcd-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.155677  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.351679  106017 request.go:632] Waited for 195.932687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:02:08.351780  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:02:08.351790  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.351808  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.351821  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.355049  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.552214  106017 request.go:632] Waited for 196.357688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:08.552278  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:08.552283  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.552291  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.552295  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.555420  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.555971  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.555995  106017 pod_ready.go:82] duration metric: took 400.310286ms for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.556009  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.752055  106017 request.go:632] Waited for 195.936446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:02:08.752134  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:02:08.752141  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.752152  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.752161  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.755742  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.951367  106017 request.go:632] Waited for 194.249731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:08.951449  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:08.951462  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.951477  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.951487  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.956306  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:02:08.956889  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.956911  106017 pod_ready.go:82] duration metric: took 400.890038ms for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.956924  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.152049  106017 request.go:632] Waited for 195.045457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m03
	I1212 00:02:09.152139  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m03
	I1212 00:02:09.152145  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.152153  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.152158  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.155700  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.351978  106017 request.go:632] Waited for 195.381489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:09.352057  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:09.352066  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.352075  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.352081  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.355842  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.356358  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:09.356379  106017 pod_ready.go:82] duration metric: took 399.447689ms for pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.356389  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.551411  106017 request.go:632] Waited for 194.933011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:02:09.551471  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:02:09.551476  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.551485  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.551489  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.554894  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.751755  106017 request.go:632] Waited for 196.244381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:09.751835  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:09.751841  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.751848  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.751854  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.754952  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.755722  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:09.755745  106017 pod_ready.go:82] duration metric: took 399.345607ms for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.755761  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.951966  106017 request.go:632] Waited for 196.120958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:02:09.952068  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:02:09.952080  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.952092  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.952104  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.955804  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.152052  106017 request.go:632] Waited for 195.597395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:10.152141  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:10.152152  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.152161  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.152166  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.155038  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:10.155549  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.155569  106017 pod_ready.go:82] duration metric: took 399.796008ms for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.155583  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.351722  106017 request.go:632] Waited for 196.013906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m03
	I1212 00:02:10.351803  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m03
	I1212 00:02:10.351811  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.351826  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.351837  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.355190  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.551684  106017 request.go:632] Waited for 195.377569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:10.551808  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:10.551816  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.551824  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.551829  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.555651  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.556178  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.556199  106017 pod_ready.go:82] duration metric: took 400.605936ms for pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.556213  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.751531  106017 request.go:632] Waited for 195.242482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:02:10.751632  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:02:10.751654  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.751669  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.751679  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.755253  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.951536  106017 request.go:632] Waited for 195.352907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:10.951607  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:10.951622  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.951633  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.951641  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.954707  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.955175  106017 pod_ready.go:93] pod "kube-proxy-hr5qc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.955193  106017 pod_ready.go:82] duration metric: took 398.973413ms for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.955204  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klpqs" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.151212  106017 request.go:632] Waited for 195.914198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-klpqs
	I1212 00:02:11.151269  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-klpqs
	I1212 00:02:11.151274  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.151282  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.151285  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.154675  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.351669  106017 request.go:632] Waited for 196.350446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:11.351765  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:11.351776  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.351788  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.351796  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.354976  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.355603  106017 pod_ready.go:93] pod "kube-proxy-klpqs" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:11.355620  106017 pod_ready.go:82] duration metric: took 400.410567ms for pod "kube-proxy-klpqs" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.355631  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.551803  106017 request.go:632] Waited for 196.076188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:02:11.551880  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:02:11.551892  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.551903  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.551915  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.555786  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.751843  106017 request.go:632] Waited for 195.375551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:11.751907  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:11.751912  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.751919  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.751924  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.755210  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.755911  106017 pod_ready.go:93] pod "kube-proxy-p2lsd" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:11.755936  106017 pod_ready.go:82] duration metric: took 400.297319ms for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.755951  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.951789  106017 request.go:632] Waited for 195.74885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:02:11.951866  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:02:11.951874  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.951891  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.951904  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.955633  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.152006  106017 request.go:632] Waited for 195.692099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:12.152097  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:12.152112  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.152120  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.152125  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.155247  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.155984  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.156005  106017 pod_ready.go:82] duration metric: took 400.045384ms for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.156015  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.352045  106017 request.go:632] Waited for 195.938605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:02:12.352121  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:02:12.352126  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.352134  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.352143  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.355894  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.551904  106017 request.go:632] Waited for 195.351995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:12.551970  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:12.551977  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.551988  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.551993  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.555652  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.556289  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.556309  106017 pod_ready.go:82] duration metric: took 400.287227ms for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.556319  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.751148  106017 request.go:632] Waited for 194.747976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m03
	I1212 00:02:12.751223  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m03
	I1212 00:02:12.751231  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.751244  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.751260  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.754576  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.951572  106017 request.go:632] Waited for 196.386091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:12.951672  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:12.951678  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.951689  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.951693  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.954814  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.955311  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.955329  106017 pod_ready.go:82] duration metric: took 398.995551ms for pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.955348  106017 pod_ready.go:39] duration metric: took 5.200033872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
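The block above is minikube's pod_ready loop: for each system pod it GETs the pod, then the node it runs on, and repeats until the pod's Ready condition reports True. A minimal standalone sketch of that check with client-go follows; the kubeconfig path and pod name are placeholders for illustration, not values taken from this run.

    // Sketch only: poll one kube-system pod until its PodReady condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig path and pod name, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	podName := "etcd-ha-565823"
    	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // tolerate transient errors and keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("ready wait finished, err =", err)
    }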
	I1212 00:02:12.955369  106017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:02:12.955437  106017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:02:12.971324  106017 api_server.go:72] duration metric: took 25.051879033s to wait for apiserver process to appear ...
	I1212 00:02:12.971354  106017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:02:12.971379  106017 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1212 00:02:12.977750  106017 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1212 00:02:12.977832  106017 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1212 00:02:12.977843  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.977856  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.977863  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.978833  106017 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:02:12.978904  106017 api_server.go:141] control plane version: v1.31.2
	I1212 00:02:12.978918  106017 api_server.go:131] duration metric: took 7.558877ms to wait for apiserver health ...
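The healthz and version probes above can be reproduced with a short client-go program. The sketch below assumes a local kubeconfig path and simply issues GET /healthz and GET /version through the authenticated REST client; it is illustrative, not the code the test harness runs.

    // Sketch: hit /healthz and report the control plane version.
    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// GET /healthz through the authenticated REST client; a healthy apiserver answers "ok".
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("healthz:", string(body))

    	ver, err := cs.Discovery().ServerVersion()
    	if err == nil {
    		fmt.Println("control plane version:", ver.GitVersion)
    	}
    }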
	I1212 00:02:12.978926  106017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:02:13.151199  106017 request.go:632] Waited for 172.198927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.151292  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.151303  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.151316  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.151325  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.157197  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:02:13.164153  106017 system_pods.go:59] 24 kube-system pods found
	I1212 00:02:13.164182  106017 system_pods.go:61] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:02:13.164187  106017 system_pods.go:61] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:02:13.164191  106017 system_pods.go:61] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:02:13.164194  106017 system_pods.go:61] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:02:13.164197  106017 system_pods.go:61] "etcd-ha-565823-m03" [506e75d1-9e81-4c24-bf45-26f7fde169fa] Running
	I1212 00:02:13.164200  106017 system_pods.go:61] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:02:13.164203  106017 system_pods.go:61] "kindnet-jffrr" [d455764c-714e-4a39-9d11-1fc4ab3ae0c9] Running
	I1212 00:02:13.164206  106017 system_pods.go:61] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:02:13.164209  106017 system_pods.go:61] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:02:13.164211  106017 system_pods.go:61] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:02:13.164214  106017 system_pods.go:61] "kube-apiserver-ha-565823-m03" [636f5858-1c42-480d-9810-abf8aa16aa69] Running
	I1212 00:02:13.164218  106017 system_pods.go:61] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:02:13.164221  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:02:13.164224  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m03" [47632e43-a401-4553-9bba-e8296023a6a2] Running
	I1212 00:02:13.164227  106017 system_pods.go:61] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:02:13.164230  106017 system_pods.go:61] "kube-proxy-klpqs" [42725ff5-dd5d-455f-a29a-9ce6c4b8f810] Running
	I1212 00:02:13.164233  106017 system_pods.go:61] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:02:13.164236  106017 system_pods.go:61] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:02:13.164240  106017 system_pods.go:61] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:02:13.164243  106017 system_pods.go:61] "kube-scheduler-ha-565823-m03" [467b67ab-33b8-4e90-b3d7-73f233c0a9e2] Running
	I1212 00:02:13.164246  106017 system_pods.go:61] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:02:13.164249  106017 system_pods.go:61] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:02:13.164251  106017 system_pods.go:61] "kube-vip-ha-565823-m03" [768639dc-dd70-4124-99c0-4e4d9b9bb9b5] Running
	I1212 00:02:13.164254  106017 system_pods.go:61] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:02:13.164259  106017 system_pods.go:74] duration metric: took 185.327636ms to wait for pod list to return data ...
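The repeated "Waited for ... due to client-side throttling" messages are produced by client-go's own request rate limiter, not by API Priority and Fairness on the server; when left unset, the client defaults are roughly 5 requests per second with a burst of 10. A sketch that raises those limits before listing the kube-system pods is shown below; the kubeconfig path and the QPS/Burst values are illustrative, not what minikube itself uses.

    // Sketch: bump client-go's rate limits, then repeat the kube-system pod list.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cfg.QPS = 50    // client default is about 5 requests/second
    	cfg.Burst = 100 // client default burst is about 10
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    }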
	I1212 00:02:13.164271  106017 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:02:13.351702  106017 request.go:632] Waited for 187.33366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:02:13.351785  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:02:13.351793  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.351804  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.351814  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.355589  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:13.355716  106017 default_sa.go:45] found service account: "default"
	I1212 00:02:13.355732  106017 default_sa.go:55] duration metric: took 191.453257ms for default service account to be created ...
	I1212 00:02:13.355741  106017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:02:13.552179  106017 request.go:632] Waited for 196.355674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.552246  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.552253  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.552265  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.552274  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.558546  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:02:13.567311  106017 system_pods.go:86] 24 kube-system pods found
	I1212 00:02:13.567335  106017 system_pods.go:89] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:02:13.567341  106017 system_pods.go:89] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:02:13.567345  106017 system_pods.go:89] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:02:13.567349  106017 system_pods.go:89] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:02:13.567352  106017 system_pods.go:89] "etcd-ha-565823-m03" [506e75d1-9e81-4c24-bf45-26f7fde169fa] Running
	I1212 00:02:13.567355  106017 system_pods.go:89] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:02:13.567359  106017 system_pods.go:89] "kindnet-jffrr" [d455764c-714e-4a39-9d11-1fc4ab3ae0c9] Running
	I1212 00:02:13.567362  106017 system_pods.go:89] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:02:13.567366  106017 system_pods.go:89] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:02:13.567369  106017 system_pods.go:89] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:02:13.567373  106017 system_pods.go:89] "kube-apiserver-ha-565823-m03" [636f5858-1c42-480d-9810-abf8aa16aa69] Running
	I1212 00:02:13.567377  106017 system_pods.go:89] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:02:13.567380  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:02:13.567384  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m03" [47632e43-a401-4553-9bba-e8296023a6a2] Running
	I1212 00:02:13.567387  106017 system_pods.go:89] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:02:13.567390  106017 system_pods.go:89] "kube-proxy-klpqs" [42725ff5-dd5d-455f-a29a-9ce6c4b8f810] Running
	I1212 00:02:13.567393  106017 system_pods.go:89] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:02:13.567396  106017 system_pods.go:89] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:02:13.567400  106017 system_pods.go:89] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:02:13.567404  106017 system_pods.go:89] "kube-scheduler-ha-565823-m03" [467b67ab-33b8-4e90-b3d7-73f233c0a9e2] Running
	I1212 00:02:13.567406  106017 system_pods.go:89] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:02:13.567411  106017 system_pods.go:89] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:02:13.567416  106017 system_pods.go:89] "kube-vip-ha-565823-m03" [768639dc-dd70-4124-99c0-4e4d9b9bb9b5] Running
	I1212 00:02:13.567419  106017 system_pods.go:89] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:02:13.567425  106017 system_pods.go:126] duration metric: took 211.677185ms to wait for k8s-apps to be running ...
	I1212 00:02:13.567435  106017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:02:13.567479  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:02:13.584100  106017 system_svc.go:56] duration metric: took 16.645631ms WaitForService to wait for kubelet
	I1212 00:02:13.584137  106017 kubeadm.go:582] duration metric: took 25.664696546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:02:13.584164  106017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:02:13.751620  106017 request.go:632] Waited for 167.335283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1212 00:02:13.751682  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1212 00:02:13.751687  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.751694  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.751707  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.755649  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:13.756501  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756522  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756532  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756535  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756538  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756541  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756545  106017 node_conditions.go:105] duration metric: took 172.375714ms to run NodePressure ...
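The NodePressure pass above reads each node's capacity (cpu, ephemeral-storage) from a single node list. A rough client-go equivalent, which also prints the memory/disk pressure conditions that check guards against, is sketched below; the kubeconfig path is a placeholder.

    // Sketch: list nodes, print capacity and pressure conditions.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    		for _, c := range n.Status.Conditions {
    			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
    				fmt.Printf("  %s=%s\n", c.Type, c.Status)
    			}
    		}
    	}
    }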
	I1212 00:02:13.756565  106017 start.go:241] waiting for startup goroutines ...
	I1212 00:02:13.756588  106017 start.go:255] writing updated cluster config ...
	I1212 00:02:13.756868  106017 ssh_runner.go:195] Run: rm -f paused
	I1212 00:02:13.808453  106017 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 00:02:13.810275  106017 out.go:177] * Done! kubectl is now configured to use "ha-565823" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.528181111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961969528148432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=96747b45-1fd3-4ec2-8255-d464d22471cc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.529465316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2433188-ddf9-4691-9dc2-8ea872413140 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.529563547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2433188-ddf9-4691-9dc2-8ea872413140 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.529908419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2433188-ddf9-4691-9dc2-8ea872413140 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.580408336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2f95020-2a0f-4454-a148-7fac61b348ae name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.580499236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2f95020-2a0f-4454-a148-7fac61b348ae name=/runtime.v1.RuntimeService/Version
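The CRI-O entries above and below show the runtime answering CRI gRPC calls (Version, ImageFsInfo, ListContainers). A rough sketch of issuing the same Version and ListContainers calls directly against the CRI-O socket follows; the socket path (/var/run/crio/crio.sock) is assumed, and the program is illustrative rather than what the test harness or kubelet actually runs.

    // Sketch: talk to CRI-O over its unix socket using the CRI v1 API.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", // assumed socket path
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. cri-o 1.29.1

    	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range list.Containers {
    		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State.String())
    	}
    }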
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.582362920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53202611-c2b8-436d-bfa8-513bbd4769df name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.582803592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961969582783354,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53202611-c2b8-436d-bfa8-513bbd4769df name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.583460074Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dffe5b0d-37f8-4eae-9b61-6f6307c2a33e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.583529090Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dffe5b0d-37f8-4eae-9b61-6f6307c2a33e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.584013992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dffe5b0d-37f8-4eae-9b61-6f6307c2a33e name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.625274333Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5da51a04-855b-43d2-ba46-ea91e6805f36 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.625363198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5da51a04-855b-43d2-ba46-ea91e6805f36 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.626732790Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dc62c33-df1a-4576-9207-d4229e9ea844 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.627385422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961969627362372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dc62c33-df1a-4576-9207-d4229e9ea844 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.627995440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a67c054-fb15-423a-9717-4718a743ef29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.628114789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a67c054-fb15-423a-9717-4718a743ef29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.628357173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a67c054-fb15-423a-9717-4718a743ef29 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.669127069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=160aa173-cd0d-4b5e-bb56-944a29318634 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.669228152Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=160aa173-cd0d-4b5e-bb56-944a29318634 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.670554183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=428b9f89-fbec-429a-8962-0b5a947af0df name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.671293282Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961969671264551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=428b9f89-fbec-429a-8962-0b5a947af0df name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.672155876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48d00a80-0a58-423e-a7e5-5d771d409cc5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.672226267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48d00a80-0a58-423e-a7e5-5d771d409cc5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:09 ha-565823 crio[664]: time="2024-12-12 00:06:09.672453218Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48d00a80-0a58-423e-a7e5-5d771d409cc5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f0043af06cb92       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0d77818a442ce       busybox-7dff88458-x4p94
	999ac64245591       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   ab4dd7022ef59       coredns-7c65d6cfc9-mqzbv
	0beb663c1a28f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   2787b4f317bfa       coredns-7c65d6cfc9-4q46c
	ba4c8c97ea090       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   4161eb9de6ddb       storage-provisioner
	bfdacc6be0aee       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   332b05e74370f       kindnet-hz9rk
	514637eeaa812       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   920e405616cde       kube-proxy-hr5qc
	768be9c254101       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   87c6df22f8976       kube-vip-ha-565823
	452c6d19b2de9       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   0ab557e831fb3       kube-controller-manager-ha-565823
	743ae8ccc81f5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e6c331c3b3439       etcd-ha-565823
	4f25ff314c2e8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   d851e6de61a68       kube-apiserver-ha-565823
	b28e7b492cfe7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a6c5b082d1924       kube-scheduler-ha-565823
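The repeated /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo and /runtime.v1.RuntimeService/ListContainers entries in the crio debug log above, and the container table they produce here, are plain CRI gRPC calls against the socket named in the node annotation further below (unix:///var/run/crio/crio.sock). The following is a minimal Go sketch of the same Version + ListContainers round trip, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available; it is illustrative only and not part of the minikube test harness.

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path as reported in the node's kubeadm cri-socket annotation (assumption:
        // this sketch runs on the node itself, where that socket is reachable).
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Equivalent of the /runtime.v1.RuntimeService/Version request in the trace.
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

        // Equivalent of ListContainers with an empty filter ("full container list").
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
        }
    }

Run on the node (for example via minikube ssh), this should print the same truncated container ids, names and states as the container status table above.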
	
	
	==> coredns [0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3] <==
	[INFO] 10.244.1.2:40894 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004450385s
	[INFO] 10.244.1.2:47929 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225565s
	[INFO] 10.244.1.2:51252 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126773s
	[INFO] 10.244.1.2:47545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126535s
	[INFO] 10.244.1.2:37654 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119814s
	[INFO] 10.244.2.2:44808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015021s
	[INFO] 10.244.2.2:48775 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815223s
	[INFO] 10.244.2.2:56148 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132782s
	[INFO] 10.244.2.2:57998 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133493s
	[INFO] 10.244.0.4:39053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087907s
	[INFO] 10.244.0.4:34059 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001091775s
	[INFO] 10.244.1.2:56415 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000835348s
	[INFO] 10.244.1.2:46751 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114147s
	[INFO] 10.244.1.2:35096 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100606s
	[INFO] 10.244.2.2:40358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136169s
	[INFO] 10.244.2.2:56318 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204673s
	[INFO] 10.244.0.4:34528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012651s
	[INFO] 10.244.1.2:56678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145563s
	[INFO] 10.244.1.2:43671 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000363816s
	[INFO] 10.244.1.2:48047 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136942s
	[INFO] 10.244.1.2:35425 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019653s
	[INFO] 10.244.2.2:59862 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112519s
	[INFO] 10.244.0.4:33935 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108695s
	[INFO] 10.244.0.4:51044 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115709s
	[INFO] 10.244.0.4:40489 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092799s
	
	
	==> coredns [999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481] <==
	[INFO] 10.244.0.4:33301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137834s
	[INFO] 10.244.0.4:55709 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001541208s
	[INFO] 10.244.0.4:59133 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001387137s
	[INFO] 10.244.1.2:35268 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004904013s
	[INFO] 10.244.1.2:45390 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166839s
	[INFO] 10.244.2.2:51385 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000248421s
	[INFO] 10.244.2.2:33701 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001310625s
	[INFO] 10.244.2.2:48335 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124081s
	[INFO] 10.244.2.2:58439 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000278252s
	[INFO] 10.244.0.4:51825 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131036s
	[INFO] 10.244.0.4:54179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001798071s
	[INFO] 10.244.0.4:38851 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094604s
	[INFO] 10.244.0.4:48660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050194s
	[INFO] 10.244.0.4:57598 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082654s
	[INFO] 10.244.0.4:43576 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100662s
	[INFO] 10.244.1.2:60988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015105s
	[INFO] 10.244.2.2:60481 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130341s
	[INFO] 10.244.2.2:48427 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079579s
	[INFO] 10.244.0.4:39760 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227961s
	[INFO] 10.244.0.4:48093 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090061s
	[INFO] 10.244.0.4:37075 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076033s
	[INFO] 10.244.2.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000258305s
	[INFO] 10.244.2.2:40866 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177114s
	[INFO] 10.244.2.2:58880 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137362s
	[INFO] 10.244.0.4:60821 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179152s
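The two coredns query logs above are ordinary in-cluster lookups: forward A/AAAA queries for kubernetes.default.svc.cluster.local and host.minikube.internal, plus reverse PTR lookups for the service IPs. Below is a minimal Go sketch that reproduces the forward lookups when run from inside a pod (so /etc/resolv.conf points at the cluster DNS); it is illustrative only and not part of the test suite.

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        // Names taken from the query log above; they resolve through coredns only
        // when this runs inside the cluster.
        for _, host := range []string{
            "kubernetes.default.svc.cluster.local",
            "host.minikube.internal",
        } {
            addrs, err := net.DefaultResolver.LookupHost(ctx, host)
            fmt.Printf("%-40s %v err=%v\n", host, addrs, err)
        }
    }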
	
	
	==> describe nodes <==
	Name:               ha-565823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T23_59_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:59:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:06:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-565823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 344476ebea784ce5952c6b9d7486bfc2
	  System UUID:                344476eb-ea78-4ce5-952c-6b9d7486bfc2
	  Boot ID:                    cf8379f5-6946-439d-a3d4-fa7d39c2dea7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x4p94              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 coredns-7c65d6cfc9-4q46c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 coredns-7c65d6cfc9-mqzbv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m29s
	  kube-system                 etcd-ha-565823                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m34s
	  kube-system                 kindnet-hz9rk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-apiserver-ha-565823             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-ha-565823    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-proxy-hr5qc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ha-565823             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-vip-ha-565823                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m28s  kube-proxy       
	  Normal  Starting                 6m34s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m34s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m34s  kubelet          Node ha-565823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s  kubelet          Node ha-565823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m34s  kubelet          Node ha-565823 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m30s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	  Normal  NodeReady                6m12s  kubelet          Node ha-565823 status is now: NodeReady
	  Normal  RegisteredNode           5m32s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	  Normal  RegisteredNode           4m16s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	
	
	Name:               ha-565823-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_00_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:00:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:03:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-565823-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9273c598fccb4678bf93616ea428fab5
	  System UUID:                9273c598-fccb-4678-bf93-616ea428fab5
	  Boot ID:                    73eb7add-f6da-422d-ad45-9773172878c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nsw2n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-565823-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m39s
	  kube-system                 kindnet-kr5js                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m41s
	  kube-system                 kube-apiserver-ha-565823-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-ha-565823-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-p2lsd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-ha-565823-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-vip-ha-565823-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node ha-565823-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node ha-565823-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m41s)  kubelet          Node ha-565823-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m36s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-565823-m02 status is now: NodeNotReady
	
	
	Name:               ha-565823-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_01_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:01:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:06:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:02:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-565823-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7cdc3cdb36e495abaa3ddda542ce8f6
	  System UUID:                a7cdc3cd-b36e-495a-baa3-ddda542ce8f6
	  Boot ID:                    e8069ced-7862-4741-8f56-298b003d0b4d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s8nmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-565823-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m24s
	  kube-system                 kindnet-jffrr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m26s
	  kube-system                 kube-apiserver-ha-565823-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-controller-manager-ha-565823-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-klpqs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-ha-565823-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-vip-ha-565823-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m26s (x8 over 4m26s)  kubelet          Node ha-565823-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s (x8 over 4m26s)  kubelet          Node ha-565823-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s (x7 over 4m26s)  kubelet          Node ha-565823-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	
	
	Name:               ha-565823-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_02_54_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:02:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:06:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:03:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ha-565823-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9da6268e700e4cc18f576f10f66d598f
	  System UUID:                9da6268e-700e-4cc1-8f57-6f10f66d598f
	  Boot ID:                    20440ea1-d260-49fc-a678-9a23de1ac4f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6qk4d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m16s
	  kube-system                 kube-proxy-j59sb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m10s                  kube-proxy       
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m16s (x2 over 3m16s)  kubelet          Node ha-565823-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m16s (x2 over 3m16s)  kubelet          Node ha-565823-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m16s (x2 over 3m16s)  kubelet          Node ha-565823-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-565823-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec11 23:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053078] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041942] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920910] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec11 23:59] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.625477] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.503596] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.061991] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056761] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.187047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.124910] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.280035] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.149659] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +4.048783] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.069316] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.737553] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.583447] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +5.823487] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.790300] kauditd_printk_skb: 34 callbacks suppressed
	[Dec12 00:00] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b] <==
	{"level":"warn","ts":"2024-12-12T00:06:09.958324Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:09.969603Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:09.977291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:09.983546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:09.989343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:09.989678Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:09.991677Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:09.993349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.002624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.003366Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.013610Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.021996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.026630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.030017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.038385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.045246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.051761Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.055098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.055827Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.058811Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.061980Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.068227Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.074828Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.095587Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:10.101438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:06:10 up 7 min,  0 users,  load average: 0.05, 0.16, 0.09
	Linux ha-565823 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098] <==
	I1212 00:05:37.120430       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:47.119691       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:47.119737       1 main.go:301] handling current node
	I1212 00:05:47.119753       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:47.119758       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:47.119987       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:47.119994       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:47.120217       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:47.120242       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	I1212 00:05:57.128438       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:57.128810       1 main.go:301] handling current node
	I1212 00:05:57.128927       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:57.128989       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:57.129767       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:57.129834       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:57.130023       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:57.130046       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	I1212 00:06:07.120193       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:06:07.120286       1 main.go:301] handling current node
	I1212 00:06:07.120313       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:06:07.120331       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:06:07.120614       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:06:07.120667       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:06:07.120856       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:06:07.120887       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95] <==
	I1211 23:59:33.823962       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1211 23:59:33.879965       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1211 23:59:33.896294       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19]
	I1211 23:59:33.897349       1 controller.go:615] quota admission added evaluator for: endpoints
	I1211 23:59:33.902931       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1211 23:59:34.842734       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1211 23:59:35.374409       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1211 23:59:35.395837       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1211 23:59:35.560177       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1211 23:59:39.944410       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1211 23:59:40.344123       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1212 00:02:22.272920       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55802: use of closed network connection
	E1212 00:02:22.464756       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55828: use of closed network connection
	E1212 00:02:22.651355       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55850: use of closed network connection
	E1212 00:02:23.038043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55874: use of closed network connection
	E1212 00:02:23.226745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55900: use of closed network connection
	E1212 00:02:23.410000       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55904: use of closed network connection
	E1212 00:02:23.591256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55924: use of closed network connection
	E1212 00:02:23.770667       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55932: use of closed network connection
	E1212 00:02:24.076679       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55962: use of closed network connection
	E1212 00:02:24.252739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55982: use of closed network connection
	E1212 00:02:24.461578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56012: use of closed network connection
	E1212 00:02:24.646238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56034: use of closed network connection
	E1212 00:02:24.817848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56044: use of closed network connection
	E1212 00:02:24.999617       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56060: use of closed network connection
	
	
	==> kube-controller-manager [452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1] <==
	I1212 00:02:54.484626       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565823-m04" podCIDRs=["10.244.3.0/24"]
	I1212 00:02:54.484689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.484721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.500323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.636444       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565823-m04"
	I1212 00:02:54.652045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.687694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:55.082775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:57.485970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:57.555718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:58.675906       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:58.734910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:04.836593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:16.466024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:16.466304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565823-m04"
	I1212 00:03:16.485293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:17.501671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:25.341676       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:04:14.668472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:14.669356       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565823-m04"
	I1212 00:04:14.705380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:14.785686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.151428ms"
	I1212 00:04:14.785837       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="78.406µs"
	I1212 00:04:18.764949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:19.939887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	
	
	==> kube-proxy [514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1211 23:59:41.687183       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1211 23:59:41.713699       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	E1211 23:59:41.713883       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:59:41.760766       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1211 23:59:41.760924       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:59:41.761009       1 server_linux.go:169] "Using iptables Proxier"
	I1211 23:59:41.764268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:59:41.765555       1 server.go:483] "Version info" version="v1.31.2"
	I1211 23:59:41.765710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:59:41.768630       1 config.go:105] "Starting endpoint slice config controller"
	I1211 23:59:41.769016       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1211 23:59:41.769876       1 config.go:199] "Starting service config controller"
	I1211 23:59:41.769889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1211 23:59:41.771229       1 config.go:328] "Starting node config controller"
	I1211 23:59:41.771259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1211 23:59:41.871443       1 shared_informer.go:320] Caches are synced for node config
	I1211 23:59:41.871633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1211 23:59:41.871849       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4] <==
	E1211 23:59:33.413263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1211 23:59:35.297693       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:02:14.658309       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="bc1a3365-d32e-42cc-b58c-95a59e72d54b" pod="default/busybox-7dff88458-nsw2n" assumedNode="ha-565823-m02" currentNode="ha-565823-m03"
	E1212 00:02:14.675240       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nsw2n\": pod busybox-7dff88458-nsw2n is already assigned to node \"ha-565823-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nsw2n" node="ha-565823-m03"
	E1212 00:02:14.679553       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bc1a3365-d32e-42cc-b58c-95a59e72d54b(default/busybox-7dff88458-nsw2n) was assumed on ha-565823-m03 but assigned to ha-565823-m02" pod="default/busybox-7dff88458-nsw2n"
	E1212 00:02:14.680513       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nsw2n\": pod busybox-7dff88458-nsw2n is already assigned to node \"ha-565823-m02\"" pod="default/busybox-7dff88458-nsw2n"
	I1212 00:02:14.680708       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nsw2n" node="ha-565823-m02"
	E1212 00:02:14.899144       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-vn6xg is already present in the active queue" pod="default/busybox-7dff88458-vn6xg"
	E1212 00:02:14.936687       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-vn6xg\" not found" pod="default/busybox-7dff88458-vn6xg"
	E1212 00:02:54.574668       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j59sb\": pod kube-proxy-j59sb is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j59sb" node="ha-565823-m04"
	E1212 00:02:54.578200       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6qk4d\": pod kindnet-6qk4d is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6qk4d" node="ha-565823-m04"
	E1212 00:02:54.581395       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b52adb65-9292-42b8-bca8-b4a44c756e15(kube-system/kube-proxy-j59sb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j59sb"
	E1212 00:02:54.582857       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j59sb\": pod kube-proxy-j59sb is already assigned to node \"ha-565823-m04\"" pod="kube-system/kube-proxy-j59sb"
	I1212 00:02:54.582977       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j59sb" node="ha-565823-m04"
	E1212 00:02:54.583674       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8ba90dda-f093-4ba3-abad-427394ebe334(kube-system/kindnet-6qk4d) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6qk4d"
	E1212 00:02:54.583943       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6qk4d\": pod kindnet-6qk4d is already assigned to node \"ha-565823-m04\"" pod="kube-system/kindnet-6qk4d"
	I1212 00:02:54.584002       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6qk4d" node="ha-565823-m04"
	E1212 00:02:54.639291       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lbbhs\": pod kube-proxy-lbbhs is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lbbhs" node="ha-565823-m04"
	E1212 00:02:54.640439       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2061489e-9108-4e76-af40-2fcc1540357b(kube-system/kube-proxy-lbbhs) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lbbhs"
	E1212 00:02:54.640623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lbbhs\": pod kube-proxy-lbbhs is already assigned to node \"ha-565823-m04\"" pod="kube-system/kube-proxy-lbbhs"
	I1212 00:02:54.640743       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lbbhs" node="ha-565823-m04"
	E1212 00:02:54.639802       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pfdgd\": pod kindnet-pfdgd is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pfdgd" node="ha-565823-m04"
	E1212 00:02:54.641599       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5bd86f21-f17e-4d19-8bac-53393aecda0b(kube-system/kindnet-pfdgd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pfdgd"
	E1212 00:02:54.641728       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pfdgd\": pod kindnet-pfdgd is already assigned to node \"ha-565823-m04\"" pod="kube-system/kindnet-pfdgd"
	I1212 00:02:54.641865       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pfdgd" node="ha-565823-m04"
	
	
	==> kubelet <==
	Dec 12 00:04:35 ha-565823 kubelet[1304]: E1212 00:04:35.644561    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961875641522910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:35 ha-565823 kubelet[1304]: E1212 00:04:35.644914    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961875641522910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:45 ha-565823 kubelet[1304]: E1212 00:04:45.646672    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961885646360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:45 ha-565823 kubelet[1304]: E1212 00:04:45.646986    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961885646360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:55 ha-565823 kubelet[1304]: E1212 00:04:55.649177    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961895648846632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:55 ha-565823 kubelet[1304]: E1212 00:04:55.649229    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961895648846632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:05 ha-565823 kubelet[1304]: E1212 00:05:05.650905    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961905650620490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:05 ha-565823 kubelet[1304]: E1212 00:05:05.650951    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961905650620490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:15 ha-565823 kubelet[1304]: E1212 00:05:15.652272    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961915651820297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:15 ha-565823 kubelet[1304]: E1212 00:05:15.652343    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961915651820297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:25 ha-565823 kubelet[1304]: E1212 00:05:25.654671    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961925654167907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:25 ha-565823 kubelet[1304]: E1212 00:05:25.655016    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961925654167907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.529805    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 00:05:35 ha-565823 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.657687    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961935657273568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.657712    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961935657273568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:45 ha-565823 kubelet[1304]: E1212 00:05:45.659792    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961945659457766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:45 ha-565823 kubelet[1304]: E1212 00:05:45.659845    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961945659457766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:55 ha-565823 kubelet[1304]: E1212 00:05:55.661887    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961955661658114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:55 ha-565823 kubelet[1304]: E1212 00:05:55.662031    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961955661658114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:06:05 ha-565823 kubelet[1304]: E1212 00:06:05.663647    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961965663423234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:06:05 ha-565823 kubelet[1304]: E1212 00:06:05.663687    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961965663423234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565823 -n ha-565823
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.18s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.894704575s)
ha_test.go:309: expected profile "ha-565823" in json of 'profile list' to have "HAppy" status but have "" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-565823\",\"Status\":\"\",\"Config\":{\"Name\":\"ha-565823\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort
\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-565823\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.19\",\"Port\":8443,\"KubernetesVersion\":\"
v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.103\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.95\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.247\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logvi
ewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\"
,\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
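The assertion at ha_test.go:309 reads the `profile list --output json` dump above and compares the top-level "Status" field of the "ha-565823" entry under "valid" against the string "HAppy"; the dump shows an empty Status instead. A minimal, illustrative Go sketch of that kind of check, assuming only the command and JSON field names visible in the output above (the simplified struct and the check itself are not taken from the test source):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the fields of `minikube profile list --output json`
// that appear in the dump above; the real profile config is much larger.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	// Same command the test runs, using the binary path seen in the log.
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-565823" && p.Status != "HAppy" {
			// This is the condition the failing assertion reports:
			// Status is "" in the dump instead of "HAppy".
			fmt.Printf("profile %s has status %q, want %q\n", p.Name, p.Status, "HAppy")
		}
	}
}
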
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565823 -n ha-565823
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 logs -n 25: (1.394441621s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m03_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m04 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp testdata/cp-test.txt                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m04_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03:/home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m03 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565823 node stop m02 -v=7                                                     | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565823 node start m02 -v=7                                                    | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
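The Audit table above records the minikube CLI invocations the harness issued against the ha-565823 profile. As a rough, hypothetical sketch (not the actual helpers_test.go code), the "(dbg) Run:" pattern amounts to shelling out to the built binary and capturing combined output, e.g.:

	// Hypothetical sketch: run one of the audited commands and capture its
	// combined output, roughly what a "(dbg) Run:" helper line reflects.
	// Binary path and arguments are taken from the table; nothing here is
	// the real test-harness implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-565823",
			"node", "start", "m02", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput() // stdout and stderr interleaved, like the post-mortem logs
		fmt.Printf("took %s, err=%v\n%s", time.Since(start), err, out)
	}
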
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:58:49
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:58:49.879098  106017 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:58:49.879215  106017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:49.879223  106017 out.go:358] Setting ErrFile to fd 2...
	I1211 23:58:49.879228  106017 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:49.879424  106017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:58:49.880067  106017 out.go:352] Setting JSON to false
	I1211 23:58:49.880934  106017 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9672,"bootTime":1733951858,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:58:49.881036  106017 start.go:139] virtualization: kvm guest
	I1211 23:58:49.883482  106017 out.go:177] * [ha-565823] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1211 23:58:49.884859  106017 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:58:49.884853  106017 notify.go:220] Checking for updates...
	I1211 23:58:49.887649  106017 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:58:49.889057  106017 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:58:49.890422  106017 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:49.891732  106017 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:58:49.893196  106017 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:58:49.894834  106017 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:58:49.929647  106017 out.go:177] * Using the kvm2 driver based on user configuration
	I1211 23:58:49.931090  106017 start.go:297] selected driver: kvm2
	I1211 23:58:49.931102  106017 start.go:901] validating driver "kvm2" against <nil>
	I1211 23:58:49.931118  106017 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:58:49.931896  106017 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:58:49.931980  106017 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1211 23:58:49.946877  106017 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1211 23:58:49.946925  106017 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:58:49.947184  106017 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1211 23:58:49.947219  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:58:49.947291  106017 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1211 23:58:49.947306  106017 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1211 23:58:49.947387  106017 start.go:340] cluster config:
	{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
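The "cluster config" block above is Go's default struct formatting of minikube's cluster configuration. A trimmed-down, hypothetical stand-in printed with %+v reproduces the same notation (bare field names, nested braces, durations such as 6m0s and 26280h0m0s); the field set here is only a subset chosen for illustration:

	// Hypothetical, trimmed-down stand-in for the real ClusterConfig struct;
	// %+v yields the same "Name:... Nodes:[{...}]" notation seen in the log.
	package main

	import (
		"fmt"
		"time"
	)

	type Node struct {
		Name, IP          string
		Port              int
		KubernetesVersion string
		ContainerRuntime  string
		ControlPlane      bool
		Worker            bool
	}

	type ClusterConfig struct {
		Name             string
		Driver           string
		Memory, CPUs     int
		Nodes            []Node
		StartHostTimeout time.Duration
		CertExpiration   time.Duration
	}

	func main() {
		cfg := ClusterConfig{
			Name:   "ha-565823",
			Driver: "kvm2",
			Memory: 2200, CPUs: 2,
			Nodes: []Node{{Port: 8443, KubernetesVersion: "v1.31.2",
				ContainerRuntime: "crio", ControlPlane: true, Worker: true}},
			StartHostTimeout: 6 * time.Minute,
			CertExpiration:   26280 * time.Hour,
		}
		fmt.Printf("%+v\n", cfg)
	}
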
	I1211 23:58:49.947534  106017 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:58:49.949244  106017 out.go:177] * Starting "ha-565823" primary control-plane node in "ha-565823" cluster
	I1211 23:58:49.950461  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:58:49.950504  106017 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:58:49.950517  106017 cache.go:56] Caching tarball of preloaded images
	I1211 23:58:49.950593  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:58:49.950607  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:58:49.950924  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:58:49.950947  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json: {Name:mk87ab89a0730849be8d507f8c0453b4c014ad9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:58:49.951100  106017 start.go:360] acquireMachinesLock for ha-565823: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:58:49.951143  106017 start.go:364] duration metric: took 25.725µs to acquireMachinesLock for "ha-565823"
	I1211 23:58:49.951167  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:58:49.951248  106017 start.go:125] createHost starting for "" (driver="kvm2")
	I1211 23:58:49.952920  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 23:58:49.953077  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:58:49.953130  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:58:49.967497  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I1211 23:58:49.967981  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:58:49.968550  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:58:49.968587  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:58:49.968981  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:58:49.969194  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:58:49.969410  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:58:49.969566  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1211 23:58:49.969614  106017 client.go:168] LocalClient.Create starting
	I1211 23:58:49.969660  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:58:49.969702  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:58:49.969727  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:58:49.969804  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:58:49.969833  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:58:49.969852  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:58:49.969875  106017 main.go:141] libmachine: Running pre-create checks...
	I1211 23:58:49.969887  106017 main.go:141] libmachine: (ha-565823) Calling .PreCreateCheck
	I1211 23:58:49.970228  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:58:49.970579  106017 main.go:141] libmachine: Creating machine...
	I1211 23:58:49.970592  106017 main.go:141] libmachine: (ha-565823) Calling .Create
	I1211 23:58:49.970720  106017 main.go:141] libmachine: (ha-565823) Creating KVM machine...
	I1211 23:58:49.971894  106017 main.go:141] libmachine: (ha-565823) DBG | found existing default KVM network
	I1211 23:58:49.972543  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:49.972397  106042 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231e0}
	I1211 23:58:49.972595  106017 main.go:141] libmachine: (ha-565823) DBG | created network xml: 
	I1211 23:58:49.972612  106017 main.go:141] libmachine: (ha-565823) DBG | <network>
	I1211 23:58:49.972619  106017 main.go:141] libmachine: (ha-565823) DBG |   <name>mk-ha-565823</name>
	I1211 23:58:49.972628  106017 main.go:141] libmachine: (ha-565823) DBG |   <dns enable='no'/>
	I1211 23:58:49.972641  106017 main.go:141] libmachine: (ha-565823) DBG |   
	I1211 23:58:49.972653  106017 main.go:141] libmachine: (ha-565823) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1211 23:58:49.972659  106017 main.go:141] libmachine: (ha-565823) DBG |     <dhcp>
	I1211 23:58:49.972666  106017 main.go:141] libmachine: (ha-565823) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1211 23:58:49.972678  106017 main.go:141] libmachine: (ha-565823) DBG |     </dhcp>
	I1211 23:58:49.972689  106017 main.go:141] libmachine: (ha-565823) DBG |   </ip>
	I1211 23:58:49.972696  106017 main.go:141] libmachine: (ha-565823) DBG |   
	I1211 23:58:49.972705  106017 main.go:141] libmachine: (ha-565823) DBG | </network>
	I1211 23:58:49.972742  106017 main.go:141] libmachine: (ha-565823) DBG | 
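Before writing the <network> XML shown above, the driver picks a free private /24 (here 192.168.39.0/24) and derives the gateway and DHCP range from it. A stdlib-only sketch of that derivation, with my own variable names and an illustrative XML template modelled on the log output:

	// Sketch: derive gateway and DHCP range from a /24, matching the values
	// in the network XML above (gateway .1, DHCP .2-.253). Stdlib only; the
	// template and names are illustrative, not minikube's own code.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, ipnet, err := net.ParseCIDR("192.168.39.0/24")
		if err != nil {
			panic(err)
		}
		base := ipnet.IP.To4()
		gateway := net.IPv4(base[0], base[1], base[2], 1)
		dhcpStart := net.IPv4(base[0], base[1], base[2], 2)
		dhcpEnd := net.IPv4(base[0], base[1], base[2], 253)

		xml := fmt.Sprintf(`<network>
	  <name>mk-ha-565823</name>
	  <dns enable='no'/>
	  <ip address='%s' netmask='%s'>
	    <dhcp>
	      <range start='%s' end='%s'/>
	    </dhcp>
	  </ip>
	</network>`, gateway, net.IP(ipnet.Mask), dhcpStart, dhcpEnd)
		fmt.Println(xml)
	}
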
	I1211 23:58:49.977592  106017 main.go:141] libmachine: (ha-565823) DBG | trying to create private KVM network mk-ha-565823 192.168.39.0/24...
	I1211 23:58:50.045920  106017 main.go:141] libmachine: (ha-565823) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 ...
	I1211 23:58:50.045945  106017 main.go:141] libmachine: (ha-565823) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:58:50.045957  106017 main.go:141] libmachine: (ha-565823) DBG | private KVM network mk-ha-565823 192.168.39.0/24 created
	I1211 23:58:50.045974  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.045851  106042 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:50.046037  106017 main.go:141] libmachine: (ha-565823) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:58:50.332532  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.332355  106042 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa...
	I1211 23:58:50.607374  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.607211  106042 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/ha-565823.rawdisk...
	I1211 23:58:50.607405  106017 main.go:141] libmachine: (ha-565823) DBG | Writing magic tar header
	I1211 23:58:50.607418  106017 main.go:141] libmachine: (ha-565823) DBG | Writing SSH key tar header
	I1211 23:58:50.607425  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:50.607336  106042 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 ...
	I1211 23:58:50.607436  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823
	I1211 23:58:50.607514  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:58:50.607560  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823 (perms=drwx------)
	I1211 23:58:50.607571  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:50.607581  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:58:50.607606  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:58:50.607624  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:58:50.607642  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:58:50.607654  106017 main.go:141] libmachine: (ha-565823) DBG | Checking permissions on dir: /home
	I1211 23:58:50.607666  106017 main.go:141] libmachine: (ha-565823) DBG | Skipping /home - not owner
	I1211 23:58:50.607678  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:58:50.607687  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:58:50.607693  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:58:50.607704  106017 main.go:141] libmachine: (ha-565823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:58:50.607717  106017 main.go:141] libmachine: (ha-565823) Creating domain...
	I1211 23:58:50.608802  106017 main.go:141] libmachine: (ha-565823) define libvirt domain using xml: 
	I1211 23:58:50.608821  106017 main.go:141] libmachine: (ha-565823) <domain type='kvm'>
	I1211 23:58:50.608828  106017 main.go:141] libmachine: (ha-565823)   <name>ha-565823</name>
	I1211 23:58:50.608832  106017 main.go:141] libmachine: (ha-565823)   <memory unit='MiB'>2200</memory>
	I1211 23:58:50.608838  106017 main.go:141] libmachine: (ha-565823)   <vcpu>2</vcpu>
	I1211 23:58:50.608842  106017 main.go:141] libmachine: (ha-565823)   <features>
	I1211 23:58:50.608846  106017 main.go:141] libmachine: (ha-565823)     <acpi/>
	I1211 23:58:50.608850  106017 main.go:141] libmachine: (ha-565823)     <apic/>
	I1211 23:58:50.608857  106017 main.go:141] libmachine: (ha-565823)     <pae/>
	I1211 23:58:50.608868  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.608875  106017 main.go:141] libmachine: (ha-565823)   </features>
	I1211 23:58:50.608879  106017 main.go:141] libmachine: (ha-565823)   <cpu mode='host-passthrough'>
	I1211 23:58:50.608887  106017 main.go:141] libmachine: (ha-565823)   
	I1211 23:58:50.608891  106017 main.go:141] libmachine: (ha-565823)   </cpu>
	I1211 23:58:50.608898  106017 main.go:141] libmachine: (ha-565823)   <os>
	I1211 23:58:50.608902  106017 main.go:141] libmachine: (ha-565823)     <type>hvm</type>
	I1211 23:58:50.608977  106017 main.go:141] libmachine: (ha-565823)     <boot dev='cdrom'/>
	I1211 23:58:50.609011  106017 main.go:141] libmachine: (ha-565823)     <boot dev='hd'/>
	I1211 23:58:50.609024  106017 main.go:141] libmachine: (ha-565823)     <bootmenu enable='no'/>
	I1211 23:58:50.609036  106017 main.go:141] libmachine: (ha-565823)   </os>
	I1211 23:58:50.609043  106017 main.go:141] libmachine: (ha-565823)   <devices>
	I1211 23:58:50.609052  106017 main.go:141] libmachine: (ha-565823)     <disk type='file' device='cdrom'>
	I1211 23:58:50.609063  106017 main.go:141] libmachine: (ha-565823)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/boot2docker.iso'/>
	I1211 23:58:50.609074  106017 main.go:141] libmachine: (ha-565823)       <target dev='hdc' bus='scsi'/>
	I1211 23:58:50.609085  106017 main.go:141] libmachine: (ha-565823)       <readonly/>
	I1211 23:58:50.609094  106017 main.go:141] libmachine: (ha-565823)     </disk>
	I1211 23:58:50.609105  106017 main.go:141] libmachine: (ha-565823)     <disk type='file' device='disk'>
	I1211 23:58:50.609117  106017 main.go:141] libmachine: (ha-565823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:58:50.609133  106017 main.go:141] libmachine: (ha-565823)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/ha-565823.rawdisk'/>
	I1211 23:58:50.609144  106017 main.go:141] libmachine: (ha-565823)       <target dev='hda' bus='virtio'/>
	I1211 23:58:50.609154  106017 main.go:141] libmachine: (ha-565823)     </disk>
	I1211 23:58:50.609164  106017 main.go:141] libmachine: (ha-565823)     <interface type='network'>
	I1211 23:58:50.609176  106017 main.go:141] libmachine: (ha-565823)       <source network='mk-ha-565823'/>
	I1211 23:58:50.609187  106017 main.go:141] libmachine: (ha-565823)       <model type='virtio'/>
	I1211 23:58:50.609198  106017 main.go:141] libmachine: (ha-565823)     </interface>
	I1211 23:58:50.609209  106017 main.go:141] libmachine: (ha-565823)     <interface type='network'>
	I1211 23:58:50.609221  106017 main.go:141] libmachine: (ha-565823)       <source network='default'/>
	I1211 23:58:50.609230  106017 main.go:141] libmachine: (ha-565823)       <model type='virtio'/>
	I1211 23:58:50.609240  106017 main.go:141] libmachine: (ha-565823)     </interface>
	I1211 23:58:50.609249  106017 main.go:141] libmachine: (ha-565823)     <serial type='pty'>
	I1211 23:58:50.609271  106017 main.go:141] libmachine: (ha-565823)       <target port='0'/>
	I1211 23:58:50.609292  106017 main.go:141] libmachine: (ha-565823)     </serial>
	I1211 23:58:50.609319  106017 main.go:141] libmachine: (ha-565823)     <console type='pty'>
	I1211 23:58:50.609342  106017 main.go:141] libmachine: (ha-565823)       <target type='serial' port='0'/>
	I1211 23:58:50.609358  106017 main.go:141] libmachine: (ha-565823)     </console>
	I1211 23:58:50.609368  106017 main.go:141] libmachine: (ha-565823)     <rng model='virtio'>
	I1211 23:58:50.609380  106017 main.go:141] libmachine: (ha-565823)       <backend model='random'>/dev/random</backend>
	I1211 23:58:50.609388  106017 main.go:141] libmachine: (ha-565823)     </rng>
	I1211 23:58:50.609393  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.609399  106017 main.go:141] libmachine: (ha-565823)     
	I1211 23:58:50.609404  106017 main.go:141] libmachine: (ha-565823)   </devices>
	I1211 23:58:50.609412  106017 main.go:141] libmachine: (ha-565823) </domain>
	I1211 23:58:50.609425  106017 main.go:141] libmachine: (ha-565823) 
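The domain XML printed above is then handed to libvirt to define and boot the VM. A minimal sketch, assuming the libvirt.org/go/libvirt bindings and a trimmed version of the XML; minikube's kvm2 driver wraps this with its own plugin/RPC layer, retries, and logging, so treat this as illustrative only:

	// Minimal sketch: define and start a domain from XML, assuming the
	// libvirt.org/go/libvirt bindings (requires libvirt headers / cgo).
	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Trimmed version of the XML in the log; the real one adds the ISO
		// cdrom, raw disk, two virtio NICs, serial console, and rng device.
		domainXML := `<domain type='kvm'>
	  <name>ha-565823</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <os><type>hvm</type></os>
	</domain>`

		dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // "Creating domain..."
			log.Fatal(err)
		}
	}
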
	I1211 23:58:50.614253  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:5a:5d:6a in network default
	I1211 23:58:50.614867  106017 main.go:141] libmachine: (ha-565823) Ensuring networks are active...
	I1211 23:58:50.614888  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:50.615542  106017 main.go:141] libmachine: (ha-565823) Ensuring network default is active
	I1211 23:58:50.615828  106017 main.go:141] libmachine: (ha-565823) Ensuring network mk-ha-565823 is active
	I1211 23:58:50.616242  106017 main.go:141] libmachine: (ha-565823) Getting domain xml...
	I1211 23:58:50.616898  106017 main.go:141] libmachine: (ha-565823) Creating domain...
	I1211 23:58:51.817451  106017 main.go:141] libmachine: (ha-565823) Waiting to get IP...
	I1211 23:58:51.818184  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:51.818533  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:51.818576  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:51.818514  106042 retry.go:31] will retry after 280.301496ms: waiting for machine to come up
	I1211 23:58:52.100046  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.100502  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.100533  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.100451  106042 retry.go:31] will retry after 276.944736ms: waiting for machine to come up
	I1211 23:58:52.378928  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.379349  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.379382  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.379295  106042 retry.go:31] will retry after 389.022589ms: waiting for machine to come up
	I1211 23:58:52.769835  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:52.770314  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:52.770357  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:52.770269  106042 retry.go:31] will retry after 542.492277ms: waiting for machine to come up
	I1211 23:58:53.313855  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:53.314281  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:53.314305  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:53.314231  106042 retry.go:31] will retry after 742.209465ms: waiting for machine to come up
	I1211 23:58:54.058032  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:54.058453  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:54.058490  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:54.058433  106042 retry.go:31] will retry after 754.421967ms: waiting for machine to come up
	I1211 23:58:54.814555  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:54.814980  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:54.815017  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:54.814915  106042 retry.go:31] will retry after 802.576471ms: waiting for machine to come up
	I1211 23:58:55.619852  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:55.620325  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:55.620362  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:55.620271  106042 retry.go:31] will retry after 1.192308346s: waiting for machine to come up
	I1211 23:58:56.815553  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:56.816025  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:56.816050  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:56.815966  106042 retry.go:31] will retry after 1.618860426s: waiting for machine to come up
	I1211 23:58:58.436766  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:58:58.437231  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:58:58.437256  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:58:58.437186  106042 retry.go:31] will retry after 2.219805666s: waiting for machine to come up
	I1211 23:59:00.658607  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:00.659028  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:00.659058  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:00.658968  106042 retry.go:31] will retry after 1.768582626s: waiting for machine to come up
	I1211 23:59:02.429943  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:02.430433  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:02.430464  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:02.430369  106042 retry.go:31] will retry after 2.185532844s: waiting for machine to come up
	I1211 23:59:04.617032  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:04.617473  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:04.617499  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:04.617419  106042 retry.go:31] will retry after 4.346976865s: waiting for machine to come up
	I1211 23:59:08.969389  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:08.969741  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find current IP address of domain ha-565823 in network mk-ha-565823
	I1211 23:59:08.969760  106017 main.go:141] libmachine: (ha-565823) DBG | I1211 23:59:08.969711  106042 retry.go:31] will retry after 4.969601196s: waiting for machine to come up
	I1211 23:59:13.943658  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:13.944048  106017 main.go:141] libmachine: (ha-565823) Found IP for machine: 192.168.39.19
	I1211 23:59:13.944063  106017 main.go:141] libmachine: (ha-565823) Reserving static IP address...
	I1211 23:59:13.944071  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has current primary IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:13.944392  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find host DHCP lease matching {name: "ha-565823", mac: "52:54:00:2b:2e:da", ip: "192.168.39.19"} in network mk-ha-565823
	I1211 23:59:14.015315  106017 main.go:141] libmachine: (ha-565823) DBG | Getting to WaitForSSH function...
	I1211 23:59:14.015347  106017 main.go:141] libmachine: (ha-565823) Reserved static IP address: 192.168.39.19
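The "will retry after ..." lines above come from polling the libvirt network's DHCP leases for the domain's MAC address with growing, jittered delays until an address appears (280ms up to roughly 5s here). A hedged stdlib sketch of that pattern; the helper names, starting delay, and growth factor are mine, not minikube's retry package:

	// Sketch of the poll-with-growing-jittered-backoff pattern behind the
	// "will retry after ..." log lines. Illustrative only.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP stands in for querying the network's DHCP leases for the
	// domain's MAC; it always fails here so the backoff behaviour is visible.
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	func waitForIP(mac string, deadline time.Duration) (string, error) {
		start := time.Now()
		wait := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			d := wait + time.Duration(rand.Int63n(int64(wait))) // jitter, then grow
			fmt.Printf("will retry after %s: waiting for machine to come up\n", d)
			time.Sleep(d)
			wait = wait * 3 / 2
		}
		return "", fmt.Errorf("no IP for %s within %s", mac, deadline)
	}

	func main() {
		if _, err := waitForIP("52:54:00:2b:2e:da", 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}
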
	I1211 23:59:14.015425  106017 main.go:141] libmachine: (ha-565823) Waiting for SSH to be available...
	I1211 23:59:14.017689  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:14.018021  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823
	I1211 23:59:14.018050  106017 main.go:141] libmachine: (ha-565823) DBG | unable to find defined IP address of network mk-ha-565823 interface with MAC address 52:54:00:2b:2e:da
	I1211 23:59:14.018183  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH client type: external
	I1211 23:59:14.018223  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa (-rw-------)
	I1211 23:59:14.018268  106017 main.go:141] libmachine: (ha-565823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:59:14.018288  106017 main.go:141] libmachine: (ha-565823) DBG | About to run SSH command:
	I1211 23:59:14.018327  106017 main.go:141] libmachine: (ha-565823) DBG | exit 0
	I1211 23:59:14.021958  106017 main.go:141] libmachine: (ha-565823) DBG | SSH cmd err, output: exit status 255: 
	I1211 23:59:14.021983  106017 main.go:141] libmachine: (ha-565823) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1211 23:59:14.021992  106017 main.go:141] libmachine: (ha-565823) DBG | command : exit 0
	I1211 23:59:14.022004  106017 main.go:141] libmachine: (ha-565823) DBG | err     : exit status 255
	I1211 23:59:14.022014  106017 main.go:141] libmachine: (ha-565823) DBG | output  : 
	I1211 23:59:17.023677  106017 main.go:141] libmachine: (ha-565823) DBG | Getting to WaitForSSH function...
	I1211 23:59:17.026110  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.026503  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.026529  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.026696  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH client type: external
	I1211 23:59:17.026723  106017 main.go:141] libmachine: (ha-565823) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa (-rw-------)
	I1211 23:59:17.026749  106017 main.go:141] libmachine: (ha-565823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1211 23:59:17.026776  106017 main.go:141] libmachine: (ha-565823) DBG | About to run SSH command:
	I1211 23:59:17.026792  106017 main.go:141] libmachine: (ha-565823) DBG | exit 0
	I1211 23:59:17.155941  106017 main.go:141] libmachine: (ha-565823) DBG | SSH cmd err, output: <nil>: 
	I1211 23:59:17.156245  106017 main.go:141] libmachine: (ha-565823) KVM machine creation complete!
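WaitForSSH above shells out to the system ssh client with the options shown in the log and treats a successful remote "exit 0" as the readiness signal; the first probe fails with exit status 255 until sshd inside the guest is up. A sketch of the same probe using os/exec, with the flag list and key path taken from the log and the wrapper function and retry count being mine:

	// Sketch: probe SSH readiness by running "exit 0" through /usr/bin/ssh
	// with the options from the log. Illustrative wrapper, not libmachine.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(ip, keyPath string) bool {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
			"-i", keyPath, "-p", "22", "docker@" + ip, "exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run() == nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa"
		for i := 0; i < 5; i++ {
			if sshReady("192.168.39.19", key) {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(3 * time.Second) // the log retries after ~3s on exit status 255
		}
		fmt.Println("SSH never became available")
	}
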
	I1211 23:59:17.156531  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:59:17.157110  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:17.157306  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:17.157460  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1211 23:59:17.157473  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:17.158855  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1211 23:59:17.158893  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1211 23:59:17.158902  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1211 23:59:17.158918  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.161015  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.161305  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.161347  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.161435  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.161600  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.161751  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.161869  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.162043  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.162241  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.162251  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1211 23:59:17.270900  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1211 23:59:17.270927  106017 main.go:141] libmachine: Detecting the provisioner...
	I1211 23:59:17.270938  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.273797  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.274144  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.274170  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.274323  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.274499  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.274631  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.274743  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.274871  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.275034  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.275045  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1211 23:59:17.388514  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1211 23:59:17.388598  106017 main.go:141] libmachine: found compatible host: buildroot
	I1211 23:59:17.388612  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1211 23:59:17.388622  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.388876  106017 buildroot.go:166] provisioning hostname "ha-565823"
	I1211 23:59:17.388901  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.389119  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.391763  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.392089  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.392117  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.392206  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.392374  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.392583  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.392750  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.392900  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.393085  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.393098  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823 && echo "ha-565823" | sudo tee /etc/hostname
	I1211 23:59:17.517872  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823
	
	I1211 23:59:17.517906  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.520794  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.521115  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.521139  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.521316  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.521505  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.521649  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.521748  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.521909  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.522131  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.522150  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1211 23:59:17.641444  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
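The provisioner sets the hostname and then runs the small shell script shown above to keep /etc/hosts consistent, rewriting or appending the 127.0.1.1 entry. A sketch of assembling that script for an arbitrary hostname; the Go wrapper is illustrative, while the script body mirrors the logged commands:

	// Sketch: build the hostname/hosts-file script that the provisioner runs
	// over SSH (see the shell block above).
	package main

	import "fmt"

	func hostnameScript(name string) string {
		return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, name)
	}

	func main() {
		fmt.Println(hostnameScript("ha-565823"))
	}
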
	I1211 23:59:17.641473  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1211 23:59:17.641523  106017 buildroot.go:174] setting up certificates
	I1211 23:59:17.641537  106017 provision.go:84] configureAuth start
	I1211 23:59:17.641550  106017 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1211 23:59:17.641858  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:17.644632  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.644929  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.644969  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.645145  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.647106  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.647440  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.647460  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.647633  106017 provision.go:143] copyHostCerts
	I1211 23:59:17.647667  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1211 23:59:17.647703  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1211 23:59:17.647712  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1211 23:59:17.647777  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1211 23:59:17.647854  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1211 23:59:17.647873  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1211 23:59:17.647879  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1211 23:59:17.647903  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1211 23:59:17.647943  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1211 23:59:17.647959  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1211 23:59:17.647965  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1211 23:59:17.647985  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1211 23:59:17.648036  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823 san=[127.0.0.1 192.168.39.19 ha-565823 localhost minikube]
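configureAuth issues a server certificate whose SANs match the san=[...] list above (loopback, the VM IP, the machine name, localhost, minikube). A rough sketch of producing such a certificate with Go's crypto/x509; unlike the real flow, which loads ca.pem/ca-key.pem from .minikube/certs, this self-signs a throwaway CA for brevity and skips error handling:

// servercert_sketch.go - illustrative only, not minikube's cert helpers.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for ca.pem/ca-key.pem; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.ha-565823"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the provision log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-565823"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-565823", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.19")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	// server-key.pem would be written alongside; only the cert is printed here.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}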
	I1211 23:59:17.803088  106017 provision.go:177] copyRemoteCerts
	I1211 23:59:17.803154  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1211 23:59:17.803180  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.806065  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.806383  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.806401  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.806621  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.806836  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.806981  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.807172  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:17.894618  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1211 23:59:17.894691  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1211 23:59:17.921956  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1211 23:59:17.922023  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1211 23:59:17.948821  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1211 23:59:17.948890  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1211 23:59:17.975580  106017 provision.go:87] duration metric: took 334.027463ms to configureAuth
	I1211 23:59:17.975634  106017 buildroot.go:189] setting minikube options for container-runtime
	I1211 23:59:17.975827  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:17.975904  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:17.978577  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.978850  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:17.978901  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:17.979082  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:17.979257  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.979385  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:17.979493  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:17.979692  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:17.979889  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:17.979912  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1211 23:59:18.235267  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1211 23:59:18.235313  106017 main.go:141] libmachine: Checking connection to Docker...
	I1211 23:59:18.235325  106017 main.go:141] libmachine: (ha-565823) Calling .GetURL
	I1211 23:59:18.236752  106017 main.go:141] libmachine: (ha-565823) DBG | Using libvirt version 6000000
	I1211 23:59:18.239115  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.239502  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.239532  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.239731  106017 main.go:141] libmachine: Docker is up and running!
	I1211 23:59:18.239753  106017 main.go:141] libmachine: Reticulating splines...
	I1211 23:59:18.239771  106017 client.go:171] duration metric: took 28.270144196s to LocalClient.Create
	I1211 23:59:18.239864  106017 start.go:167] duration metric: took 28.27029823s to libmachine.API.Create "ha-565823"
	I1211 23:59:18.239885  106017 start.go:293] postStartSetup for "ha-565823" (driver="kvm2")
	I1211 23:59:18.239895  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1211 23:59:18.239917  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.240179  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1211 23:59:18.240211  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.242164  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.242466  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.242493  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.242645  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.242832  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.242993  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.243119  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
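sshutil builds its client from the machine's id_rsa key and the docker user shown above. A hedged sketch of an equivalent connection using golang.org/x/crypto/ssh (requires `go get golang.org/x/crypto/ssh`; the key path and address come from the log line, but this is not minikube's sshutil implementation):

// sshclient_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM host key is not pinned
	}
	client, err := ssh.Dial("tcp", "192.168.39.19:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Run the same probe the next log line shows.
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}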
	I1211 23:59:18.330660  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1211 23:59:18.335424  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1211 23:59:18.335447  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1211 23:59:18.335503  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1211 23:59:18.335574  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1211 23:59:18.335584  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1211 23:59:18.335717  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1211 23:59:18.346001  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1211 23:59:18.374524  106017 start.go:296] duration metric: took 134.623519ms for postStartSetup
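filesync mirrors everything under .minikube/files to the same path on the guest, which is why files/etc/ssl/certs/936002.pem lands at /etc/ssl/certs/936002.pem above. A small sketch of that scan, assuming a local root directory; the actual copy would go through the SSH runner rather than being done here:

// filesync_sketch.go - illustrative only.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// listSyncTargets maps every regular file under root to its destination path
// on the guest by stripping the root prefix.
func listSyncTargets(root string) (map[string]string, error) {
	targets := map[string]string{}
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel := strings.TrimPrefix(strings.TrimPrefix(path, root), "/")
		targets[path] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return targets, err
}

func main() {
	targets, err := listSyncTargets("/home/jenkins/minikube-integration/20083-86355/.minikube/files")
	if err != nil {
		panic(err)
	}
	for src, dst := range targets {
		fmt.Printf("%s -> %s\n", src, dst)
	}
}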
	I1211 23:59:18.374583  106017 main.go:141] libmachine: (ha-565823) Calling .GetConfigRaw
	I1211 23:59:18.375295  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:18.377900  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.378234  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.378262  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.378516  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:18.378710  106017 start.go:128] duration metric: took 28.427447509s to createHost
	I1211 23:59:18.378738  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.380862  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.381196  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.381220  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.381358  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.381537  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.381691  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.381809  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.381919  106017 main.go:141] libmachine: Using SSH client type: native
	I1211 23:59:18.382120  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1211 23:59:18.382133  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1211 23:59:18.492450  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961558.472734336
	
	I1211 23:59:18.492473  106017 fix.go:216] guest clock: 1733961558.472734336
	I1211 23:59:18.492480  106017 fix.go:229] Guest: 2024-12-11 23:59:18.472734336 +0000 UTC Remote: 2024-12-11 23:59:18.378724497 +0000 UTC m=+28.540551547 (delta=94.009839ms)
	I1211 23:59:18.492521  106017 fix.go:200] guest clock delta is within tolerance: 94.009839ms
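fix.go compares the guest's `date +%s.%N` output with the host clock and only resynchronizes when the skew exceeds a tolerance. A toy reproduction of that comparison using the two timestamps from the log above (hypothetical helper, not the real fix.go code):

// clockdelta_sketch.go - illustrative only.
package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output and returns how far
// ahead of (or behind) the given host timestamp the guest clock is.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	// float64 parsing loses sub-microsecond precision; fine for a skew check.
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Both values are taken from the log lines above.
	host := time.Unix(1733961558, 378724497)
	delta, err := guestClockDelta("1733961558.472734336", host)
	if err != nil {
		panic(err)
	}
	tolerance := time.Second
	fmt.Printf("delta=%v, within %v tolerance: %v\n", delta, tolerance, delta.Abs() < tolerance)
}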
	I1211 23:59:18.492529  106017 start.go:83] releasing machines lock for "ha-565823", held for 28.541373742s
	I1211 23:59:18.492553  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.492820  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:18.495388  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.495716  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.495743  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.495888  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496371  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496534  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:18.496615  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1211 23:59:18.496654  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.496714  106017 ssh_runner.go:195] Run: cat /version.json
	I1211 23:59:18.496740  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:18.499135  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499486  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.499548  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499569  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499675  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.499845  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.499921  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:18.499961  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:18.499985  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.500123  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:18.500135  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.500278  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:18.500460  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:18.500604  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:18.607330  106017 ssh_runner.go:195] Run: systemctl --version
	I1211 23:59:18.613387  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1211 23:59:18.776622  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1211 23:59:18.783443  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1211 23:59:18.783538  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1211 23:59:18.799688  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1211 23:59:18.799713  106017 start.go:495] detecting cgroup driver to use...
	I1211 23:59:18.799774  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1211 23:59:18.816025  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1211 23:59:18.830854  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1211 23:59:18.830908  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1211 23:59:18.845980  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1211 23:59:18.860893  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1211 23:59:18.978441  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1211 23:59:19.134043  106017 docker.go:233] disabling docker service ...
	I1211 23:59:19.134112  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1211 23:59:19.149156  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1211 23:59:19.162275  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1211 23:59:19.283529  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1211 23:59:19.409189  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1211 23:59:19.423558  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1211 23:59:19.442528  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1211 23:59:19.442599  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.453566  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1211 23:59:19.453654  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.464397  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.475199  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.486049  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1211 23:59:19.497021  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.507803  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1211 23:59:19.524919  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
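The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. A sketch of the same rewrites applied to an in-memory copy of the file with Go regexps (the starting contents below are assumed for illustration; this is not minikube's crio.go):

// crioconf_sketch.go - illustrative only.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, as the first sed above does.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Drop any existing conmon_cgroup line, then switch the cgroup manager to
	// cgroupfs and re-add conmon_cgroup = "pod" right after it.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	// Allow unprivileged low ports, mirroring the default_sysctls edits.
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	fmt.Print(conf)
}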
	I1211 23:59:19.535844  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1211 23:59:19.545546  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1211 23:59:19.545598  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1211 23:59:19.559407  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1211 23:59:19.569383  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:59:19.689090  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1211 23:59:19.791744  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1211 23:59:19.791811  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1211 23:59:19.796877  106017 start.go:563] Will wait 60s for crictl version
	I1211 23:59:19.796945  106017 ssh_runner.go:195] Run: which crictl
	I1211 23:59:19.801083  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1211 23:59:19.845670  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1211 23:59:19.845758  106017 ssh_runner.go:195] Run: crio --version
	I1211 23:59:19.875253  106017 ssh_runner.go:195] Run: crio --version
	I1211 23:59:19.904311  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1211 23:59:19.906690  106017 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1211 23:59:19.909356  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:19.909726  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:19.909755  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:19.910412  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1211 23:59:19.915735  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:59:19.929145  106017 kubeadm.go:883] updating cluster {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1211 23:59:19.929263  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:59:19.929323  106017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:59:19.962567  106017 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1211 23:59:19.962636  106017 ssh_runner.go:195] Run: which lz4
	I1211 23:59:19.966688  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1211 23:59:19.966797  106017 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1211 23:59:19.970897  106017 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1211 23:59:19.970929  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1211 23:59:21.360986  106017 crio.go:462] duration metric: took 1.394221262s to copy over tarball
	I1211 23:59:21.361088  106017 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1211 23:59:23.449972  106017 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.088850329s)
	I1211 23:59:23.450033  106017 crio.go:469] duration metric: took 2.08900198s to extract the tarball
	I1211 23:59:23.450045  106017 ssh_runner.go:146] rm: /preloaded.tar.lz4
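The preload path above: a stat on /preloaded.tar.lz4 fails, so the ~392 MB cri-o preload tarball is copied over, unpacked with lz4 into /var, and then removed. A condensed sketch of the remote side of that sequence, run as plain commands through os/exec (paths taken from the log; in reality each step goes through the SSH runner and the scp is what fills the gap when the stat fails):

// preload_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes one step, echoing it the way ssh_runner logs each "Run:" line.
func run(args ...string) error {
	fmt.Println("Run:", strings.Join(args, " "))
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", args[0], err, out)
	}
	return nil
}

func main() {
	// The existence check is allowed to fail: a missing tarball just means it
	// still has to be copied over (the scp itself is omitted from this sketch).
	if err := run("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
		fmt.Println("tarball not present yet:", err)
	}
	// Unpack the preloaded images into /var, preserving xattrs, then clean up.
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
		return
	}
	if err := run("sudo", "rm", "-f", "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}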
	I1211 23:59:23.487452  106017 ssh_runner.go:195] Run: sudo crictl images --output json
	I1211 23:59:23.534823  106017 crio.go:514] all images are preloaded for cri-o runtime.
	I1211 23:59:23.534855  106017 cache_images.go:84] Images are preloaded, skipping loading
	I1211 23:59:23.534866  106017 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.2 crio true true} ...
	I1211 23:59:23.535012  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
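The kubelet drop-in above is rendered from the node's name, IP and Kubernetes version and is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A small text/template sketch that produces the same ExecStart line (template text approximated from the log, not taken from minikube's source):

// kubeletunit_sketch.go - illustrative only.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.2", "ha-565823", "192.168.39.19"}
	tmpl := template.Must(template.New("kubelet").Parse(unit))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}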
	I1211 23:59:23.535085  106017 ssh_runner.go:195] Run: crio config
	I1211 23:59:23.584878  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:59:23.584896  106017 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 23:59:23.584905  106017 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1211 23:59:23.584925  106017 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565823 NodeName:ha-565823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1211 23:59:23.585039  106017 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
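Two values in the generated config above have to stay disjoint: the pod subnet (10.244.0.0/16) handed to kindnet and the service CIDR (10.96.0.0/12) used for ClusterIPs. A quick standard-library way to sanity-check that (a hypothetical helper, not part of minikube):

// cidrcheck_sketch.go - illustrative only.
package main

import (
	"fmt"
	"net"
)

// cidrsOverlap reports whether either network contains the other's base
// address, which is sufficient for well-formed CIDR blocks.
func cidrsOverlap(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, podNet, err := net.ParseCIDR("10.244.0.0/16") // podSubnet from the config above
	if err != nil {
		panic(err)
	}
	_, svcNet, err := net.ParseCIDR("10.96.0.0/12") // serviceSubnet from the config above
	if err != nil {
		panic(err)
	}
	fmt.Println("pod and service CIDRs overlap:", cidrsOverlap(podNet, svcNet))
}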
	I1211 23:59:23.585064  106017 kube-vip.go:115] generating kube-vip config ...
	I1211 23:59:23.585112  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1211 23:59:23.603981  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1211 23:59:23.604115  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1211 23:59:23.604182  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1211 23:59:23.614397  106017 binaries.go:44] Found k8s binaries, skipping transfer
	I1211 23:59:23.614477  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1211 23:59:23.624289  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1211 23:59:23.641517  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1211 23:59:23.658716  106017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1211 23:59:23.675660  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1211 23:59:23.692530  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1211 23:59:23.696599  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1211 23:59:23.709445  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1211 23:59:23.845220  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1211 23:59:23.862954  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.19
	I1211 23:59:23.862981  106017 certs.go:194] generating shared ca certs ...
	I1211 23:59:23.863000  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:23.863207  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1211 23:59:23.863251  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1211 23:59:23.863262  106017 certs.go:256] generating profile certs ...
	I1211 23:59:23.863328  106017 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1211 23:59:23.863357  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt with IP's: []
	I1211 23:59:24.110700  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt ...
	I1211 23:59:24.110730  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt: {Name:mk50d526eb9350fec1f3c58be1ef98b2039770b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.110932  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key ...
	I1211 23:59:24.110948  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key: {Name:mk947a896656d347feed0e5ddd7c2c37edce03fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.111050  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c
	I1211 23:59:24.111082  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.254]
	I1211 23:59:24.333387  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c ...
	I1211 23:59:24.333420  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c: {Name:mkfc61798e61cb1d7ac0b35769a3179525ca368b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.333599  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c ...
	I1211 23:59:24.333627  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c: {Name:mk4a04314c10f352160875e4af47370a91a0db88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.333740  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.56854f9c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1211 23:59:24.333840  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.56854f9c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1211 23:59:24.333924  106017 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1211 23:59:24.333944  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt with IP's: []
	I1211 23:59:24.464961  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt ...
	I1211 23:59:24.464993  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt: {Name:mkbb1cf3b9047082cee6fcd6adaa9509e1729b3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.465183  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key ...
	I1211 23:59:24.465203  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key: {Name:mkc9ec571078b7167489918f5cf8f1ea61967aea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:24.465319  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1211 23:59:24.465348  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1211 23:59:24.465364  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1211 23:59:24.465387  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1211 23:59:24.465405  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1211 23:59:24.465422  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1211 23:59:24.465435  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1211 23:59:24.465452  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1211 23:59:24.465528  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1211 23:59:24.465577  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1211 23:59:24.465592  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1211 23:59:24.465634  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1211 23:59:24.465664  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1211 23:59:24.465695  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1211 23:59:24.465752  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1211 23:59:24.465790  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.465812  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.465831  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.466545  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1211 23:59:24.494141  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1211 23:59:24.519556  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1211 23:59:24.544702  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1211 23:59:24.569766  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1211 23:59:24.595380  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1211 23:59:24.621226  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1211 23:59:24.649860  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1211 23:59:24.698075  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1211 23:59:24.728714  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1211 23:59:24.753139  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1211 23:59:24.777957  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1211 23:59:24.796289  106017 ssh_runner.go:195] Run: openssl version
	I1211 23:59:24.802883  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1211 23:59:24.816553  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.821741  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.821804  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1211 23:59:24.828574  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1211 23:59:24.840713  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1211 23:59:24.853013  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.858281  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.858331  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1211 23:59:24.864829  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1211 23:59:24.875963  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1211 23:59:24.886500  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.891673  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.891726  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1211 23:59:24.898344  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
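Each CA bundle copied into /usr/share/ca-certificates also gets a <subject-hash>.0 symlink in /etc/ssl/certs so OpenSSL can locate it by hash; that is what the paired `openssl x509 -hash -noout` and `ln -fs` commands above do. A sketch of the same pairing driven from Go (hypothetical helper; it shells out to the same openssl binary and needs write access to /etc/ssl/certs):

// certhashlink_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates /etc/ssl/certs/<hash>.0 pointing at certPath, where
// <hash> is OpenSSL's subject-name hash of the certificate.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: replace an existing link if present.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	for _, cert := range []string{
		"/usr/share/ca-certificates/936002.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/93600.pem",
	} {
		if err := linkByHash(cert); err != nil {
			fmt.Println(cert, err)
		}
	}
}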
	I1211 23:59:24.910633  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1211 23:59:24.915220  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1211 23:59:24.915279  106017 kubeadm.go:392] StartCluster: {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:59:24.915383  106017 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1211 23:59:24.915454  106017 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1211 23:59:24.954743  106017 cri.go:89] found id: ""
	I1211 23:59:24.954813  106017 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1211 23:59:24.965887  106017 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1211 23:59:24.975963  106017 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1211 23:59:24.985759  106017 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1211 23:59:24.985784  106017 kubeadm.go:157] found existing configuration files:
	
	I1211 23:59:24.985837  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1211 23:59:24.995322  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1211 23:59:24.995387  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1211 23:59:25.005782  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1211 23:59:25.015121  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1211 23:59:25.015216  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1211 23:59:25.024739  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1211 23:59:25.033898  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1211 23:59:25.033949  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1211 23:59:25.043527  106017 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1211 23:59:25.052795  106017 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1211 23:59:25.052860  106017 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
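Because this is a fresh VM, none of the four kubeconfigs exist yet, so every grep for the control-plane endpoint above fails and the "removal" of each file is a no-op before kubeadm init runs. A compact sketch of that check-and-remove loop (hypothetical; the real code drives these commands through the SSH runner):

// staleconfig_sketch.go - illustrative only.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, cfg := range configs {
		// grep exits non-zero when the endpoint (or the file itself) is missing;
		// in that case the stale or absent config is removed before kubeadm init.
		if err := exec.Command("grep", endpoint, cfg).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, cfg)
			if rmErr := os.Remove(cfg); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println(rmErr)
			}
		}
	}
}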
	I1211 23:59:25.063719  106017 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1211 23:59:25.172138  106017 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1211 23:59:25.172231  106017 kubeadm.go:310] [preflight] Running pre-flight checks
	I1211 23:59:25.282095  106017 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1211 23:59:25.282220  106017 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1211 23:59:25.282346  106017 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1211 23:59:25.292987  106017 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1211 23:59:25.507248  106017 out.go:235]   - Generating certificates and keys ...
	I1211 23:59:25.507374  106017 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1211 23:59:25.507500  106017 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1211 23:59:25.628233  106017 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1211 23:59:25.895094  106017 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1211 23:59:26.195266  106017 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1211 23:59:26.355531  106017 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1211 23:59:26.415298  106017 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1211 23:59:26.415433  106017 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-565823 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1211 23:59:26.603280  106017 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1211 23:59:26.603516  106017 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-565823 localhost] and IPs [192.168.39.19 127.0.0.1 ::1]
	I1211 23:59:26.737544  106017 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1211 23:59:26.938736  106017 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1211 23:59:27.118447  106017 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1211 23:59:27.118579  106017 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1211 23:59:27.214058  106017 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1211 23:59:27.283360  106017 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1211 23:59:27.437118  106017 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1211 23:59:27.583693  106017 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1211 23:59:27.738001  106017 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1211 23:59:27.738673  106017 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1211 23:59:27.741933  106017 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1211 23:59:27.743702  106017 out.go:235]   - Booting up control plane ...
	I1211 23:59:27.743844  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1211 23:59:27.744424  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1211 23:59:27.746935  106017 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1211 23:59:27.765392  106017 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1211 23:59:27.772566  106017 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1211 23:59:27.772699  106017 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1211 23:59:27.925671  106017 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1211 23:59:27.925813  106017 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1211 23:59:28.450340  106017 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 524.075614ms
	I1211 23:59:28.450451  106017 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1211 23:59:34.524805  106017 kubeadm.go:310] [api-check] The API server is healthy after 6.076898322s
	I1211 23:59:34.537381  106017 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1211 23:59:34.553285  106017 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1211 23:59:35.079814  106017 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1211 23:59:35.080057  106017 kubeadm.go:310] [mark-control-plane] Marking the node ha-565823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1211 23:59:35.095582  106017 kubeadm.go:310] [bootstrap-token] Using token: lktsit.hvyjnx8elfe20z7f
	I1211 23:59:35.097027  106017 out.go:235]   - Configuring RBAC rules ...
	I1211 23:59:35.097177  106017 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1211 23:59:35.101780  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1211 23:59:35.113593  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1211 23:59:35.118164  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1211 23:59:35.121511  106017 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1211 23:59:35.125148  106017 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1211 23:59:35.144131  106017 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1211 23:59:35.407109  106017 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1211 23:59:35.930699  106017 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1211 23:59:35.931710  106017 kubeadm.go:310] 
	I1211 23:59:35.931771  106017 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1211 23:59:35.931775  106017 kubeadm.go:310] 
	I1211 23:59:35.931851  106017 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1211 23:59:35.931859  106017 kubeadm.go:310] 
	I1211 23:59:35.931880  106017 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1211 23:59:35.931927  106017 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1211 23:59:35.931982  106017 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1211 23:59:35.932000  106017 kubeadm.go:310] 
	I1211 23:59:35.932049  106017 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1211 23:59:35.932058  106017 kubeadm.go:310] 
	I1211 23:59:35.932118  106017 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1211 23:59:35.932126  106017 kubeadm.go:310] 
	I1211 23:59:35.932168  106017 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1211 23:59:35.932259  106017 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1211 23:59:35.932333  106017 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1211 23:59:35.932350  106017 kubeadm.go:310] 
	I1211 23:59:35.932432  106017 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1211 23:59:35.932499  106017 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1211 23:59:35.932506  106017 kubeadm.go:310] 
	I1211 23:59:35.932579  106017 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lktsit.hvyjnx8elfe20z7f \
	I1211 23:59:35.932666  106017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1211 23:59:35.932687  106017 kubeadm.go:310] 	--control-plane 
	I1211 23:59:35.932692  106017 kubeadm.go:310] 
	I1211 23:59:35.932780  106017 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1211 23:59:35.932793  106017 kubeadm.go:310] 
	I1211 23:59:35.932900  106017 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lktsit.hvyjnx8elfe20z7f \
	I1211 23:59:35.933031  106017 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1211 23:59:35.933914  106017 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1211 23:59:35.934034  106017 cni.go:84] Creating CNI manager for ""
	I1211 23:59:35.934056  106017 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1211 23:59:35.936050  106017 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1211 23:59:35.937506  106017 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1211 23:59:35.943577  106017 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1211 23:59:35.943610  106017 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1211 23:59:35.964609  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
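The two runs above are the entire CNI bring-up for the first node: minikube renders the kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. A minimal way to re-check that step by hand, assuming the same profile name, paths and binary version shown in the log (the apply is idempotent), would be roughly:

	minikube ssh -p ha-565823 -- stat /opt/cni/bin/portmap
	minikube ssh -p ha-565823 -- sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig apply -f /var/tmp/minikube/cni.yaml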
	I1211 23:59:36.354699  106017 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1211 23:59:36.354799  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:36.354832  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823 minikube.k8s.io/updated_at=2024_12_11T23_59_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=true
	I1211 23:59:36.386725  106017 ops.go:34] apiserver oom_adj: -16
	I1211 23:59:36.511318  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:37.011972  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:37.511719  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:38.012059  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:38.511637  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:39.012451  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:39.512222  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.012218  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.512204  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1211 23:59:40.605442  106017 kubeadm.go:1113] duration metric: took 4.250718988s to wait for elevateKubeSystemPrivileges
	I1211 23:59:40.605479  106017 kubeadm.go:394] duration metric: took 15.690206878s to StartCluster
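The elevateKubeSystemPrivileges step that just finished boils down to the minikube-rbac clusterrolebinding plus the minikube.k8s.io/* node labels applied above; once the kubeconfig is written, both can be verified from the host with plain kubectl (illustrative commands, assuming the context is named after the profile as minikube normally arranges):

	kubectl --context ha-565823 get clusterrolebinding minikube-rbac
	kubectl --context ha-565823 get node ha-565823 --show-labels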
	I1211 23:59:40.605505  106017 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:40.605593  106017 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:59:40.606578  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1211 23:59:40.606860  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1211 23:59:40.606860  106017 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:59:40.606883  106017 start.go:241] waiting for startup goroutines ...
	I1211 23:59:40.606899  106017 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1211 23:59:40.606982  106017 addons.go:69] Setting storage-provisioner=true in profile "ha-565823"
	I1211 23:59:40.606989  106017 addons.go:69] Setting default-storageclass=true in profile "ha-565823"
	I1211 23:59:40.607004  106017 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565823"
	I1211 23:59:40.607018  106017 addons.go:234] Setting addon storage-provisioner=true in "ha-565823"
	I1211 23:59:40.607045  106017 host.go:66] Checking if "ha-565823" exists ...
	I1211 23:59:40.607426  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.607469  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.607635  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:40.607793  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.607838  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.622728  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I1211 23:59:40.622807  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43751
	I1211 23:59:40.623266  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.623370  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.623966  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.623993  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.624004  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.624015  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.624390  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.624398  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.624567  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.624920  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.624961  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.626695  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:59:40.627009  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I1211 23:59:40.627499  106017 cert_rotation.go:140] Starting client certificate rotation controller
	I1211 23:59:40.627813  106017 addons.go:234] Setting addon default-storageclass=true in "ha-565823"
	I1211 23:59:40.627859  106017 host.go:66] Checking if "ha-565823" exists ...
	I1211 23:59:40.628133  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.628177  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.640869  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32899
	I1211 23:59:40.641437  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.642016  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.642043  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.642434  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.642635  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.643106  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38535
	I1211 23:59:40.643674  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.644240  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.644275  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.644588  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.644640  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:40.645087  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:40.645136  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:40.646489  106017 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1211 23:59:40.647996  106017 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:59:40.648015  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1211 23:59:40.648030  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:40.651165  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.651679  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:40.651703  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.651939  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:40.652136  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:40.652353  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:40.652515  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:40.661089  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44101
	I1211 23:59:40.661521  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:40.661949  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:40.661970  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:40.662302  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:40.662464  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1211 23:59:40.664023  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1211 23:59:40.664204  106017 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1211 23:59:40.664219  106017 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1211 23:59:40.664234  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1211 23:59:40.666799  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.667194  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1211 23:59:40.667218  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1211 23:59:40.667366  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1211 23:59:40.667518  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1211 23:59:40.667676  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1211 23:59:40.667787  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1211 23:59:40.766556  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1211 23:59:40.838934  106017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1211 23:59:40.853931  106017 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1211 23:59:41.384410  106017 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
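The sed pipeline above only touches two spots in the default Corefile shipped by kubeadm: it inserts a log directive ahead of errors and a hosts block, resolving host.minikube.internal to the gateway address, ahead of the forward directive. After the replace, the affected part of the CoreDNS ConfigMap looks roughly like this (a sketch; untouched directives omitted):

	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf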
	I1211 23:59:41.687789  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.687839  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688024  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688044  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688143  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688158  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.688166  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688175  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688183  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688295  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688309  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688316  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.688337  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.688398  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.688424  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.688407  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.688511  106017 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1211 23:59:41.688531  106017 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1211 23:59:41.688635  106017 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1211 23:59:41.688642  106017 round_trippers.go:469] Request Headers:
	I1211 23:59:41.688654  106017 round_trippers.go:473]     Accept: application/json, */*
	I1211 23:59:41.688660  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1211 23:59:41.689067  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.689084  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.689112  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.703120  106017 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1211 23:59:41.703858  106017 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1211 23:59:41.703876  106017 round_trippers.go:469] Request Headers:
	I1211 23:59:41.703888  106017 round_trippers.go:473]     Content-Type: application/json
	I1211 23:59:41.703896  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1211 23:59:41.703902  106017 round_trippers.go:473]     Accept: application/json, */*
	I1211 23:59:41.707451  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1211 23:59:41.707880  106017 main.go:141] libmachine: Making call to close driver server
	I1211 23:59:41.707905  106017 main.go:141] libmachine: (ha-565823) Calling .Close
	I1211 23:59:41.708200  106017 main.go:141] libmachine: (ha-565823) DBG | Closing plugin on server side
	I1211 23:59:41.708289  106017 main.go:141] libmachine: Successfully made call to close driver server
	I1211 23:59:41.708309  106017 main.go:141] libmachine: Making call to close connection to plugin binary
	I1211 23:59:41.710098  106017 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1211 23:59:41.711624  106017 addons.go:510] duration metric: took 1.104728302s for enable addons: enabled=[storage-provisioner default-storageclass]
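With both default addons reported as enabled, the quickest sanity checks from the host are the addon list and the provisioner pod itself (illustrative follow-up commands, not part of the test run; the pod name assumes minikube's usual storage-provisioner deployment):

	minikube addons list -p ha-565823
	kubectl --context ha-565823 -n kube-system get pod storage-provisioner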
	I1211 23:59:41.711657  106017 start.go:246] waiting for cluster config update ...
	I1211 23:59:41.711669  106017 start.go:255] writing updated cluster config ...
	I1211 23:59:41.713334  106017 out.go:201] 
	I1211 23:59:41.714788  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:59:41.714856  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:41.716555  106017 out.go:177] * Starting "ha-565823-m02" control-plane node in "ha-565823" cluster
	I1211 23:59:41.717794  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:59:41.717815  106017 cache.go:56] Caching tarball of preloaded images
	I1211 23:59:41.717923  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1211 23:59:41.717935  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1211 23:59:41.717999  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1211 23:59:41.718156  106017 start.go:360] acquireMachinesLock for ha-565823-m02: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1211 23:59:41.718199  106017 start.go:364] duration metric: took 25.794µs to acquireMachinesLock for "ha-565823-m02"
	I1211 23:59:41.718224  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1211 23:59:41.718291  106017 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1211 23:59:41.719692  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1211 23:59:41.719777  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:59:41.719812  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:59:41.734465  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1211 23:59:41.734950  106017 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:59:41.735455  106017 main.go:141] libmachine: Using API Version  1
	I1211 23:59:41.735478  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:59:41.735843  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:59:41.736006  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1211 23:59:41.736149  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1211 23:59:41.736349  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1211 23:59:41.736395  106017 client.go:168] LocalClient.Create starting
	I1211 23:59:41.736425  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1211 23:59:41.736455  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:59:41.736469  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:59:41.736519  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1211 23:59:41.736537  106017 main.go:141] libmachine: Decoding PEM data...
	I1211 23:59:41.736547  106017 main.go:141] libmachine: Parsing certificate...
	I1211 23:59:41.736559  106017 main.go:141] libmachine: Running pre-create checks...
	I1211 23:59:41.736567  106017 main.go:141] libmachine: (ha-565823-m02) Calling .PreCreateCheck
	I1211 23:59:41.736735  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1211 23:59:41.737076  106017 main.go:141] libmachine: Creating machine...
	I1211 23:59:41.737091  106017 main.go:141] libmachine: (ha-565823-m02) Calling .Create
	I1211 23:59:41.737203  106017 main.go:141] libmachine: (ha-565823-m02) Creating KVM machine...
	I1211 23:59:41.738412  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found existing default KVM network
	I1211 23:59:41.738502  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found existing private KVM network mk-ha-565823
	I1211 23:59:41.738691  106017 main.go:141] libmachine: (ha-565823-m02) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 ...
	I1211 23:59:41.738735  106017 main.go:141] libmachine: (ha-565823-m02) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:59:41.738778  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:41.738685  106399 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:59:41.738888  106017 main.go:141] libmachine: (ha-565823-m02) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1211 23:59:42.010827  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.010671  106399 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa...
	I1211 23:59:42.081269  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.081125  106399 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk...
	I1211 23:59:42.081297  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Writing magic tar header
	I1211 23:59:42.081315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Writing SSH key tar header
	I1211 23:59:42.081327  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:42.081241  106399 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 ...
	I1211 23:59:42.081337  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02
	I1211 23:59:42.081349  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1211 23:59:42.081395  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02 (perms=drwx------)
	I1211 23:59:42.081428  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1211 23:59:42.081445  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:59:42.081465  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1211 23:59:42.081477  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1211 23:59:42.081489  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home/jenkins
	I1211 23:59:42.081497  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Checking permissions on dir: /home
	I1211 23:59:42.081510  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1211 23:59:42.081524  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1211 23:59:42.081536  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Skipping /home - not owner
	I1211 23:59:42.081553  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1211 23:59:42.081564  106017 main.go:141] libmachine: (ha-565823-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1211 23:59:42.081577  106017 main.go:141] libmachine: (ha-565823-m02) Creating domain...
	I1211 23:59:42.082570  106017 main.go:141] libmachine: (ha-565823-m02) define libvirt domain using xml: 
	I1211 23:59:42.082593  106017 main.go:141] libmachine: (ha-565823-m02) <domain type='kvm'>
	I1211 23:59:42.082600  106017 main.go:141] libmachine: (ha-565823-m02)   <name>ha-565823-m02</name>
	I1211 23:59:42.082605  106017 main.go:141] libmachine: (ha-565823-m02)   <memory unit='MiB'>2200</memory>
	I1211 23:59:42.082610  106017 main.go:141] libmachine: (ha-565823-m02)   <vcpu>2</vcpu>
	I1211 23:59:42.082618  106017 main.go:141] libmachine: (ha-565823-m02)   <features>
	I1211 23:59:42.082626  106017 main.go:141] libmachine: (ha-565823-m02)     <acpi/>
	I1211 23:59:42.082641  106017 main.go:141] libmachine: (ha-565823-m02)     <apic/>
	I1211 23:59:42.082671  106017 main.go:141] libmachine: (ha-565823-m02)     <pae/>
	I1211 23:59:42.082693  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.082705  106017 main.go:141] libmachine: (ha-565823-m02)   </features>
	I1211 23:59:42.082719  106017 main.go:141] libmachine: (ha-565823-m02)   <cpu mode='host-passthrough'>
	I1211 23:59:42.082728  106017 main.go:141] libmachine: (ha-565823-m02)   
	I1211 23:59:42.082736  106017 main.go:141] libmachine: (ha-565823-m02)   </cpu>
	I1211 23:59:42.082744  106017 main.go:141] libmachine: (ha-565823-m02)   <os>
	I1211 23:59:42.082754  106017 main.go:141] libmachine: (ha-565823-m02)     <type>hvm</type>
	I1211 23:59:42.082761  106017 main.go:141] libmachine: (ha-565823-m02)     <boot dev='cdrom'/>
	I1211 23:59:42.082771  106017 main.go:141] libmachine: (ha-565823-m02)     <boot dev='hd'/>
	I1211 23:59:42.082779  106017 main.go:141] libmachine: (ha-565823-m02)     <bootmenu enable='no'/>
	I1211 23:59:42.082792  106017 main.go:141] libmachine: (ha-565823-m02)   </os>
	I1211 23:59:42.082803  106017 main.go:141] libmachine: (ha-565823-m02)   <devices>
	I1211 23:59:42.082811  106017 main.go:141] libmachine: (ha-565823-m02)     <disk type='file' device='cdrom'>
	I1211 23:59:42.082828  106017 main.go:141] libmachine: (ha-565823-m02)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/boot2docker.iso'/>
	I1211 23:59:42.082836  106017 main.go:141] libmachine: (ha-565823-m02)       <target dev='hdc' bus='scsi'/>
	I1211 23:59:42.082847  106017 main.go:141] libmachine: (ha-565823-m02)       <readonly/>
	I1211 23:59:42.082857  106017 main.go:141] libmachine: (ha-565823-m02)     </disk>
	I1211 23:59:42.082887  106017 main.go:141] libmachine: (ha-565823-m02)     <disk type='file' device='disk'>
	I1211 23:59:42.082908  106017 main.go:141] libmachine: (ha-565823-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1211 23:59:42.082928  106017 main.go:141] libmachine: (ha-565823-m02)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/ha-565823-m02.rawdisk'/>
	I1211 23:59:42.082944  106017 main.go:141] libmachine: (ha-565823-m02)       <target dev='hda' bus='virtio'/>
	I1211 23:59:42.082957  106017 main.go:141] libmachine: (ha-565823-m02)     </disk>
	I1211 23:59:42.082968  106017 main.go:141] libmachine: (ha-565823-m02)     <interface type='network'>
	I1211 23:59:42.082978  106017 main.go:141] libmachine: (ha-565823-m02)       <source network='mk-ha-565823'/>
	I1211 23:59:42.082985  106017 main.go:141] libmachine: (ha-565823-m02)       <model type='virtio'/>
	I1211 23:59:42.082990  106017 main.go:141] libmachine: (ha-565823-m02)     </interface>
	I1211 23:59:42.082997  106017 main.go:141] libmachine: (ha-565823-m02)     <interface type='network'>
	I1211 23:59:42.083003  106017 main.go:141] libmachine: (ha-565823-m02)       <source network='default'/>
	I1211 23:59:42.083012  106017 main.go:141] libmachine: (ha-565823-m02)       <model type='virtio'/>
	I1211 23:59:42.083025  106017 main.go:141] libmachine: (ha-565823-m02)     </interface>
	I1211 23:59:42.083038  106017 main.go:141] libmachine: (ha-565823-m02)     <serial type='pty'>
	I1211 23:59:42.083047  106017 main.go:141] libmachine: (ha-565823-m02)       <target port='0'/>
	I1211 23:59:42.083054  106017 main.go:141] libmachine: (ha-565823-m02)     </serial>
	I1211 23:59:42.083065  106017 main.go:141] libmachine: (ha-565823-m02)     <console type='pty'>
	I1211 23:59:42.083077  106017 main.go:141] libmachine: (ha-565823-m02)       <target type='serial' port='0'/>
	I1211 23:59:42.083089  106017 main.go:141] libmachine: (ha-565823-m02)     </console>
	I1211 23:59:42.083098  106017 main.go:141] libmachine: (ha-565823-m02)     <rng model='virtio'>
	I1211 23:59:42.083112  106017 main.go:141] libmachine: (ha-565823-m02)       <backend model='random'>/dev/random</backend>
	I1211 23:59:42.083126  106017 main.go:141] libmachine: (ha-565823-m02)     </rng>
	I1211 23:59:42.083154  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.083172  106017 main.go:141] libmachine: (ha-565823-m02)     
	I1211 23:59:42.083184  106017 main.go:141] libmachine: (ha-565823-m02)   </devices>
	I1211 23:59:42.083193  106017 main.go:141] libmachine: (ha-565823-m02) </domain>
	I1211 23:59:42.083206  106017 main.go:141] libmachine: (ha-565823-m02) 
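The domain definition above is plain libvirt XML, so the freshly created m02 machine can also be inspected outside of minikube while the log below waits for its DHCP lease (hypothetical debugging commands, assuming the qemu:///system URI from the cluster config and the mk-ha-565823 network named in the log):

	virsh -c qemu:///system dumpxml ha-565823-m02
	virsh -c qemu:///system net-dhcp-leases mk-ha-565823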
	I1211 23:59:42.090031  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:4e:60:e6 in network default
	I1211 23:59:42.090722  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring networks are active...
	I1211 23:59:42.090744  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:42.091386  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring network default is active
	I1211 23:59:42.091728  106017 main.go:141] libmachine: (ha-565823-m02) Ensuring network mk-ha-565823 is active
	I1211 23:59:42.092172  106017 main.go:141] libmachine: (ha-565823-m02) Getting domain xml...
	I1211 23:59:42.092821  106017 main.go:141] libmachine: (ha-565823-m02) Creating domain...
	I1211 23:59:43.306722  106017 main.go:141] libmachine: (ha-565823-m02) Waiting to get IP...
	I1211 23:59:43.307541  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.307970  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.308021  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.307943  106399 retry.go:31] will retry after 188.292611ms: waiting for machine to come up
	I1211 23:59:43.498538  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.498980  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.499007  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.498936  106399 retry.go:31] will retry after 383.283577ms: waiting for machine to come up
	I1211 23:59:43.883676  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:43.884158  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:43.884186  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:43.884123  106399 retry.go:31] will retry after 368.673726ms: waiting for machine to come up
	I1211 23:59:44.254720  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:44.255182  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:44.255205  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:44.255142  106399 retry.go:31] will retry after 403.445822ms: waiting for machine to come up
	I1211 23:59:44.660664  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:44.661153  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:44.661178  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:44.661074  106399 retry.go:31] will retry after 718.942978ms: waiting for machine to come up
	I1211 23:59:45.382183  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:45.382736  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:45.382761  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:45.382694  106399 retry.go:31] will retry after 941.806671ms: waiting for machine to come up
	I1211 23:59:46.326070  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:46.326533  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:46.326566  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:46.326481  106399 retry.go:31] will retry after 1.01864437s: waiting for machine to come up
	I1211 23:59:47.347315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:47.347790  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:47.347812  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:47.347737  106399 retry.go:31] will retry after 1.213138s: waiting for machine to come up
	I1211 23:59:48.562238  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:48.562705  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:48.562737  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:48.562658  106399 retry.go:31] will retry after 1.846591325s: waiting for machine to come up
	I1211 23:59:50.410650  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:50.411116  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:50.411143  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:50.411072  106399 retry.go:31] will retry after 2.02434837s: waiting for machine to come up
	I1211 23:59:52.436763  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:52.437247  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:52.437276  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:52.437194  106399 retry.go:31] will retry after 1.785823174s: waiting for machine to come up
	I1211 23:59:54.224640  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:54.224948  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:54.224975  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:54.224901  106399 retry.go:31] will retry after 2.203569579s: waiting for machine to come up
	I1211 23:59:56.431378  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1211 23:59:56.431904  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1211 23:59:56.431933  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1211 23:59:56.431858  106399 retry.go:31] will retry after 3.94903919s: waiting for machine to come up
	I1212 00:00:00.384703  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:00.385175  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find current IP address of domain ha-565823-m02 in network mk-ha-565823
	I1212 00:00:00.385208  106017 main.go:141] libmachine: (ha-565823-m02) DBG | I1212 00:00:00.385121  106399 retry.go:31] will retry after 3.809627495s: waiting for machine to come up
	I1212 00:00:04.197607  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.198181  106017 main.go:141] libmachine: (ha-565823-m02) Found IP for machine: 192.168.39.103
	I1212 00:00:04.198204  106017 main.go:141] libmachine: (ha-565823-m02) Reserving static IP address...
	I1212 00:00:04.198220  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has current primary IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.198616  106017 main.go:141] libmachine: (ha-565823-m02) DBG | unable to find host DHCP lease matching {name: "ha-565823-m02", mac: "52:54:00:cc:31:80", ip: "192.168.39.103"} in network mk-ha-565823
	I1212 00:00:04.273114  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Getting to WaitForSSH function...
	I1212 00:00:04.273143  106017 main.go:141] libmachine: (ha-565823-m02) Reserved static IP address: 192.168.39.103
	I1212 00:00:04.273155  106017 main.go:141] libmachine: (ha-565823-m02) Waiting for SSH to be available...
	I1212 00:00:04.275998  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.276409  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.276438  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.276561  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using SSH client type: external
	I1212 00:00:04.276592  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa (-rw-------)
	I1212 00:00:04.276623  106017 main.go:141] libmachine: (ha-565823-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:00:04.276639  106017 main.go:141] libmachine: (ha-565823-m02) DBG | About to run SSH command:
	I1212 00:00:04.276655  106017 main.go:141] libmachine: (ha-565823-m02) DBG | exit 0
	I1212 00:00:04.400102  106017 main.go:141] libmachine: (ha-565823-m02) DBG | SSH cmd err, output: <nil>: 
	I1212 00:00:04.400348  106017 main.go:141] libmachine: (ha-565823-m02) KVM machine creation complete!
	I1212 00:00:04.400912  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1212 00:00:04.401484  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:04.401664  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:04.401821  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:00:04.401837  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetState
	I1212 00:00:04.403174  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:00:04.403192  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:00:04.403199  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:00:04.403208  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.405388  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.405786  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.405820  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.405928  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.406109  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.406313  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.406472  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.406636  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.406846  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.406860  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:00:04.507379  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:00:04.507409  106017 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:00:04.507426  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.510219  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.510595  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.510633  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.510776  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.511014  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.511172  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.511323  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.511507  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.511752  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.511765  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:00:04.612413  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:00:04.612516  106017 main.go:141] libmachine: found compatible host: buildroot
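Provisioner detection above keys off the ID field in the /etc/os-release output ("buildroot"). A small, self-contained sketch of parsing those KEY=VALUE pairs, purely illustrative and not the libmachine code path:

package main

import (
	"fmt"
	"strings"
)

// parseOSRelease turns the KEY=VALUE lines of /etc/os-release into a map,
// stripping surrounding quotes so ID=buildroot and
// PRETTY_NAME="Buildroot 2023.02.9" both come out clean.
func parseOSRelease(contents string) map[string]string {
	info := map[string]string{}
	for _, line := range strings.Split(contents, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		parts := strings.SplitN(line, "=", 2)
		if len(parts) != 2 {
			continue
		}
		info[parts[0]] = strings.Trim(parts[1], "\"")
	}
	return info
}

func main() {
	// Sample mirrors the /etc/os-release output captured above.
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(sample)
	if info["ID"] == "buildroot" {
		fmt.Println("found compatible host:", info["ID"])
	}
}
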
	I1212 00:00:04.612530  106017 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:00:04.612538  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.612840  106017 buildroot.go:166] provisioning hostname "ha-565823-m02"
	I1212 00:00:04.612874  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.613079  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.615872  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.616272  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.616326  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.616447  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.616621  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.616780  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.616976  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.617134  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.617294  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.617306  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823-m02 && echo "ha-565823-m02" | sudo tee /etc/hostname
	I1212 00:00:04.736911  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823-m02
	
	I1212 00:00:04.736949  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.739899  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.740287  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.740321  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.740530  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:04.740723  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.740885  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:04.741022  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:04.741259  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:04.741462  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:04.741481  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:00:04.854133  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:00:04.854171  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:00:04.854189  106017 buildroot.go:174] setting up certificates
	I1212 00:00:04.854199  106017 provision.go:84] configureAuth start
	I1212 00:00:04.854213  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetMachineName
	I1212 00:00:04.854617  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:04.858031  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.858466  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.858492  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.858772  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:04.860980  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.861315  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:04.861344  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:04.861482  106017 provision.go:143] copyHostCerts
	I1212 00:00:04.861512  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:00:04.861546  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:00:04.861556  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:00:04.861621  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:00:04.861699  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:00:04.861718  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:00:04.861725  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:00:04.861748  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:00:04.861792  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:00:04.861809  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:00:04.861815  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:00:04.861836  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:00:04.861892  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823-m02 san=[127.0.0.1 192.168.39.103 ha-565823-m02 localhost minikube]
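The server certificate generated above is signed by the local minikube CA and carries the listed SANs (127.0.0.1, 192.168.39.103, ha-565823-m02, localhost, minikube). A hedged sketch of that signing step with the standard crypto/x509 package follows; it assumes PEM-encoded PKCS#1 RSA CA files and illustrative file names, and is not the provision.go implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert issues a server certificate for the given SANs, signed by
// the CA at caCertPath/caKeyPath. Paths and subject are illustrative; the CA
// key is assumed to be a PEM "RSA PRIVATE KEY" (PKCS#1) block.
func signServerCert(caCertPath, caKeyPath, outCert, outKey string, dnsNames []string, ips []net.IP) error {
	caPEM, err := os.ReadFile(caCertPath)
	if err != nil {
		return err
	}
	caBlock, _ := pem.Decode(caPEM)
	if caBlock == nil {
		return fmt.Errorf("no PEM data in %s", caCertPath)
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		return err
	}
	keyPEM, err := os.ReadFile(caKeyPath)
	if err != nil {
		return err
	}
	keyBlock, _ := pem.Decode(keyPEM)
	if keyBlock == nil {
		return fmt.Errorf("no PEM data in %s", caKeyPath)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		return err
	}
	// Fresh key pair for the node's server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-565823-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		return err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEMOut := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
	if err := os.WriteFile(outCert, certPEM, 0o644); err != nil {
		return err
	}
	return os.WriteFile(outKey, keyPEMOut, 0o600)
}

func main() {
	// SANs mirror the san=[...] list in the log above.
	err := signServerCert("ca.pem", "ca-key.pem", "server.pem", "server-key.pem",
		[]string{"ha-565823-m02", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.103")})
	if err != nil {
		panic(err)
	}
}
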
	I1212 00:00:05.017387  106017 provision.go:177] copyRemoteCerts
	I1212 00:00:05.017447  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:00:05.017475  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.020320  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.020751  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.020781  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.020994  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.021285  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.021461  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.021631  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.103134  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:00:05.103225  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:00:05.128318  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:00:05.128392  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:00:05.152814  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:00:05.152893  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:00:05.177479  106017 provision.go:87] duration metric: took 323.264224ms to configureAuth
	I1212 00:00:05.177509  106017 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:00:05.177674  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:05.177748  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.180791  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.181249  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.181280  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.181463  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.181702  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.181870  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.182010  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.182176  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:05.182341  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:05.182357  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:00:05.417043  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:00:05.417067  106017 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:00:05.417075  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetURL
	I1212 00:00:05.418334  106017 main.go:141] libmachine: (ha-565823-m02) DBG | Using libvirt version 6000000
	I1212 00:00:05.420596  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.420905  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.420938  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.421114  106017 main.go:141] libmachine: Docker is up and running!
	I1212 00:00:05.421129  106017 main.go:141] libmachine: Reticulating splines...
	I1212 00:00:05.421139  106017 client.go:171] duration metric: took 23.684732891s to LocalClient.Create
	I1212 00:00:05.421170  106017 start.go:167] duration metric: took 23.684823561s to libmachine.API.Create "ha-565823"
	I1212 00:00:05.421183  106017 start.go:293] postStartSetup for "ha-565823-m02" (driver="kvm2")
	I1212 00:00:05.421197  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:00:05.421214  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.421468  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:00:05.421495  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.424694  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.425050  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.425083  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.425238  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.425449  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.425599  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.425739  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.506562  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:00:05.511891  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:00:05.511921  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:00:05.512000  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:00:05.512114  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:00:05.512128  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:00:05.512236  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:00:05.525426  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:00:05.552318  106017 start.go:296] duration metric: took 131.1154ms for postStartSetup
	I1212 00:00:05.552386  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetConfigRaw
	I1212 00:00:05.553038  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:05.556173  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.556661  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.556704  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.556972  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:05.557179  106017 start.go:128] duration metric: took 23.838875142s to createHost
	I1212 00:00:05.557206  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.559644  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.560000  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.560021  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.560242  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.560469  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.560659  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.560833  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.561033  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:00:05.561234  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1212 00:00:05.561248  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:00:05.664479  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961605.636878321
	
	I1212 00:00:05.664504  106017 fix.go:216] guest clock: 1733961605.636878321
	I1212 00:00:05.664511  106017 fix.go:229] Guest: 2024-12-12 00:00:05.636878321 +0000 UTC Remote: 2024-12-12 00:00:05.557193497 +0000 UTC m=+75.719020541 (delta=79.684824ms)
	I1212 00:00:05.664529  106017 fix.go:200] guest clock delta is within tolerance: 79.684824ms
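The clock check above reads the guest time with "date +%s.%N" and only resynchronizes when the delta to the host exceeds a tolerance; here the 79.7ms delta passes. A toy version of that comparison (the one-second tolerance is an assumed value, not necessarily minikube's):

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaOK reports whether the guest/host time difference is within the
// allowed drift.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	return math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	host := time.Now()
	guest := host.Add(79684824 * time.Nanosecond) // delta seen in the log above
	fmt.Println("within tolerance:", clockDeltaOK(guest, host, time.Second))
}
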
	I1212 00:00:05.664536  106017 start.go:83] releasing machines lock for "ha-565823-m02", held for 23.946326821s
	I1212 00:00:05.664559  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.664834  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:05.667309  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.667587  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.667625  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.670169  106017 out.go:177] * Found network options:
	I1212 00:00:05.671775  106017 out.go:177]   - NO_PROXY=192.168.39.19
	W1212 00:00:05.673420  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:00:05.673451  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.673974  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.674184  106017 main.go:141] libmachine: (ha-565823-m02) Calling .DriverName
	I1212 00:00:05.674310  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:00:05.674362  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	W1212 00:00:05.674404  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:00:05.674488  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:00:05.674510  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHHostname
	I1212 00:00:05.677209  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677558  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.677588  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677632  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.677782  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.677967  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.678067  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:05.678094  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:05.678133  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.678286  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHPort
	I1212 00:00:05.678288  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.678440  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHKeyPath
	I1212 00:00:05.678560  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetSSHUsername
	I1212 00:00:05.678668  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m02/id_rsa Username:docker}
	I1212 00:00:05.906824  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:00:05.913945  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:00:05.914026  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:00:05.931775  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:00:05.931797  106017 start.go:495] detecting cgroup driver to use...
	I1212 00:00:05.931857  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:00:05.948556  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:00:05.963326  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:00:05.963397  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:00:05.978208  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:00:05.992483  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:00:06.103988  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:00:06.275509  106017 docker.go:233] disabling docker service ...
	I1212 00:00:06.275580  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:00:06.293042  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:00:06.306048  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:00:06.431702  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:00:06.557913  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:00:06.573066  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:00:06.592463  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:00:06.592536  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.604024  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:00:06.604087  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.615267  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.626194  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.637083  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:00:06.648061  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.659477  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.677134  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:00:06.687875  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:00:06.701376  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:00:06.701451  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:00:06.714621  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
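The three commands above form a fallback chain: probe the bridge-nf-call-iptables sysctl, load br_netfilter when the sysctl is missing, then enable IPv4 forwarding. A short sketch of the same order via os/exec (must run as root; the commands mirror the log rather than any minikube API):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback seen in the log: if the sysctl
// is absent, load br_netfilter to provide it, then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Sysctl not present yet; the kernel module provides it.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
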
	I1212 00:00:06.724651  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:06.844738  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:00:06.941123  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:00:06.941186  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:00:06.946025  106017 start.go:563] Will wait 60s for crictl version
	I1212 00:00:06.946103  106017 ssh_runner.go:195] Run: which crictl
	I1212 00:00:06.950454  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:00:06.989220  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:00:06.989302  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:00:07.018407  106017 ssh_runner.go:195] Run: crio --version
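The runtime checks above shell out to crictl and crio and read their version output. A sketch that runs "crictl version" and extracts the RuntimeName/RuntimeVersion fields, assuming crictl is on PATH and prints the colon-separated plain-text form shown above:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// criRuntimeVersion runs `crictl version` and pulls the RuntimeName and
// RuntimeVersion fields out of its plain-text output.
func criRuntimeVersion() (name, version string, err error) {
	out, err := exec.Command("sudo", "crictl", "version").Output()
	if err != nil {
		return "", "", err
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		fields := strings.SplitN(sc.Text(), ":", 2)
		if len(fields) != 2 {
			continue
		}
		key, val := strings.TrimSpace(fields[0]), strings.TrimSpace(fields[1])
		switch key {
		case "RuntimeName":
			name = val
		case "RuntimeVersion":
			version = val
		}
	}
	return name, version, sc.Err()
}

func main() {
	name, version, err := criRuntimeVersion()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fmt.Printf("runtime %s %s\n", name, version)
}
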
	I1212 00:00:07.049375  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:00:07.051430  106017 out.go:177]   - env NO_PROXY=192.168.39.19
	I1212 00:00:07.052588  106017 main.go:141] libmachine: (ha-565823-m02) Calling .GetIP
	I1212 00:00:07.055087  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:07.055359  106017 main.go:141] libmachine: (ha-565823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:31:80", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:57 +0000 UTC Type:0 Mac:52:54:00:cc:31:80 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-565823-m02 Clientid:01:52:54:00:cc:31:80}
	I1212 00:00:07.055377  106017 main.go:141] libmachine: (ha-565823-m02) DBG | domain ha-565823-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:cc:31:80 in network mk-ha-565823
	I1212 00:00:07.055577  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:00:07.059718  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:00:07.072121  106017 mustload.go:65] Loading cluster: ha-565823
	I1212 00:00:07.072328  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:07.072649  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:07.072692  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:07.087345  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36461
	I1212 00:00:07.087790  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:07.088265  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:07.088285  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:07.088623  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:07.088818  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:00:07.090394  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:00:07.090786  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:07.090832  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:07.107441  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41599
	I1212 00:00:07.107836  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:07.108308  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:07.108327  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:07.108632  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:07.108786  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:07.108915  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.103
	I1212 00:00:07.108926  106017 certs.go:194] generating shared ca certs ...
	I1212 00:00:07.108939  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.109062  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:00:07.109105  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:00:07.109114  106017 certs.go:256] generating profile certs ...
	I1212 00:00:07.109178  106017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:00:07.109202  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc
	I1212 00:00:07.109217  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.254]
	I1212 00:00:07.203114  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc ...
	I1212 00:00:07.203150  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc: {Name:mk3a75c055b0a829a056d90903c78ae5decf9bac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.203349  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc ...
	I1212 00:00:07.203372  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc: {Name:mkce850d5486843203391b76609d5fd65c614c5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:00:07.203468  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.1e03bbcc -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:00:07.203647  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.1e03bbcc -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1212 00:00:07.203815  106017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:00:07.203836  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:00:07.203855  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:00:07.203870  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:00:07.203891  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:00:07.203909  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:00:07.203931  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:00:07.203949  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:00:07.203968  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:00:07.204035  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:00:07.204078  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:00:07.204113  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:00:07.204170  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:00:07.204217  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:00:07.204255  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:00:07.204310  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:00:07.204351  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.204383  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.204402  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.204445  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:00:07.207043  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:07.207413  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:00:07.207439  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:07.207647  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:00:07.207863  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:00:07.208027  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:00:07.208177  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:00:07.288012  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 00:00:07.293204  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 00:00:07.304789  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 00:00:07.310453  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 00:00:07.321124  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 00:00:07.326057  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 00:00:07.337737  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 00:00:07.342691  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1212 00:00:07.354806  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 00:00:07.359143  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 00:00:07.371799  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 00:00:07.376295  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 00:00:07.387705  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:00:07.415288  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:00:07.440414  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:00:07.466177  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:00:07.490907  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 00:00:07.517228  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:00:07.542858  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:00:07.567465  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:00:07.592181  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:00:07.616218  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:00:07.641063  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:00:07.665682  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 00:00:07.683443  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 00:00:07.700820  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 00:00:07.718283  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1212 00:00:07.735173  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 00:00:07.752079  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 00:00:07.770479  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 00:00:07.789102  106017 ssh_runner.go:195] Run: openssl version
	I1212 00:00:07.795248  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:00:07.806811  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.811750  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.811816  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:00:07.818034  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:00:07.829409  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:00:07.840952  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.845782  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.845853  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:00:07.851849  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:00:07.863158  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:00:07.875091  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.880111  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.880173  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:00:07.886325  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:00:07.897750  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:00:07.902056  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:00:07.902131  106017 kubeadm.go:934] updating node {m02 192.168.39.103 8443 v1.31.2 crio true true} ...
	I1212 00:00:07.902244  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:00:07.902279  106017 kube-vip.go:115] generating kube-vip config ...
	I1212 00:00:07.902323  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:00:07.920010  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:00:07.920099  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
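The static-pod manifest above is what gets written to /etc/kubernetes/manifests/kube-vip.yaml, with the VIP (192.168.39.254), port and load-balancing env vars filled in because control-plane load-balancing was auto-enabled. A trimmed-down sketch of rendering such a manifest from a text/template, keeping only the fields that vary per cluster; the template here is illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

// Trimmed-down manifest: only the per-cluster fields are templated.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    image: {{ .Image }}
    name: kube-vip
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

type kubeVipParams struct {
	Image string
	VIP   string
	Port  int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values mirror the generated config in the log above.
	params := kubeVipParams{Image: "ghcr.io/kube-vip/kube-vip:v0.8.7", VIP: "192.168.39.254", Port: 8443}
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
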
	I1212 00:00:07.920166  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:00:07.930159  106017 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1212 00:00:07.930221  106017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1212 00:00:07.939751  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1212 00:00:07.939776  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:00:07.939831  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:00:07.939835  106017 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1212 00:00:07.939861  106017 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1212 00:00:07.944054  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1212 00:00:07.944086  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1212 00:00:09.149265  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:00:09.168056  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:00:09.168181  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:00:09.173566  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1212 00:00:09.173601  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1212 00:00:09.219150  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:00:09.219238  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:00:09.234545  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1212 00:00:09.234589  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
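The three transfers above follow the same check-then-copy pattern: stat the target path under /var/lib/minikube/binaries/v1.31.2 on the node, and only scp the cached binary when the stat exits non-zero. A rough Go sketch of that decision; runOnNode is a hypothetical stand-in for minikube's ssh_runner, and the host is illustrative (the real run reuses the already-established ssh session):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runOnNode is a hypothetical stand-in for minikube's ssh_runner: it runs a
    // command on the node over ssh and reports whether it exited successfully.
    func runOnNode(host, cmd string) bool {
        return exec.Command("ssh", host, cmd).Run() == nil
    }

    // ensureBinary mirrors the existence check + scp fallback seen in the log.
    func ensureBinary(host, localPath, remotePath string) error {
        if runOnNode(host, fmt.Sprintf(`stat -c "%%s %%y" %s`, remotePath)) {
            return nil // already present on the node, nothing to transfer
        }
        return exec.Command("scp", localPath, host+":"+remotePath).Run()
    }

    func main() {
        err := ensureBinary("docker@192.168.39.103",
            "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl",
            "/var/lib/minikube/binaries/v1.31.2/kubectl")
        if err != nil {
            fmt.Println("transfer failed:", err)
        }
    }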
	I1212 00:00:09.726465  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 00:00:09.736811  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1212 00:00:09.753799  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:00:09.771455  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:00:09.789916  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:00:09.794008  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
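The bash one-liner above keeps /etc/hosts idempotent: it filters out any previous control-plane.minikube.internal mapping and appends the current VIP entry before copying the result back. A rough Go equivalent of the same rewrite, assuming it runs on the node with permission to replace /etc/hosts (minikube itself does it through the quoted shell command, not Go file I/O):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Drop any line that already maps control-plane.minikube.internal,
        // mirroring the grep -v $'\tcontrol-plane.minikube.internal$' filter.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }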
	I1212 00:00:09.807290  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:09.944370  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:00:09.973225  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:00:09.973893  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:09.973959  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:09.989196  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I1212 00:00:09.989723  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:09.990363  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:09.990386  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:09.990735  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:09.990931  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:09.991104  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:00:09.991104  106017 start.go:317] joinCluster: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:00:09.991225  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:00:09.991249  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:00:09.994437  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:09.995018  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:00:09.995065  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:00:09.995202  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:00:09.995448  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:00:09.995585  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:00:09.995765  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:00:10.156968  106017 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:10.157029  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token huaiy2.jqx4ang4teqw9q83 --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443"
	I1212 00:00:31.347275  106017 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token huaiy2.jqx4ang4teqw9q83 --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443": (21.190211224s)
	I1212 00:00:31.347321  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:00:31.826934  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823-m02 minikube.k8s.io/updated_at=2024_12_12T00_00_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=false
	I1212 00:00:32.001431  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565823-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1212 00:00:32.141631  106017 start.go:319] duration metric: took 22.150523355s to joinCluster
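The 22s join recorded above is the standard two-step kubeadm flow: the existing control plane mints a join command with a fresh token (kubeadm token create --print-join-command --ttl=0), and the new node runs it with --control-plane plus its own advertise address and bind port, then the node is labeled and its control-plane taint removed. A simplified Go sketch of driving those two commands with os/exec; the flags and PATH prefix are the ones visible in the log, but in the real run each step executes over ssh on a different machine rather than locally:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.31.2/kubeadm"

        // Step 1 (existing control plane): print a join command with a non-expiring token.
        out, err := exec.Command("sudo", kubeadm, "token", "create",
            "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        joinCmd := strings.TrimSpace(string(out))

        // Step 2 (joining node): run it as a control-plane join with the extra
        // flags seen in the log for ha-565823-m02.
        full := fmt.Sprintf("sudo env PATH=/var/lib/minikube/binaries/v1.31.2:$PATH "+
            "%s --ignore-preflight-errors=all "+
            "--cri-socket unix:///var/run/crio/crio.sock "+
            "--node-name=ha-565823-m02 --control-plane "+
            "--apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443",
            joinCmd)
        if err := exec.Command("bash", "-c", full).Run(); err != nil {
            panic(err)
        }
    }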
	I1212 00:00:32.141725  106017 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:32.141997  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:32.143552  106017 out.go:177] * Verifying Kubernetes components...
	I1212 00:00:32.145227  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:00:32.332043  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:00:32.348508  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:00:32.348864  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 00:00:32.348951  106017 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I1212 00:00:32.349295  106017 node_ready.go:35] waiting up to 6m0s for node "ha-565823-m02" to be "Ready" ...
	I1212 00:00:32.349423  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:32.349436  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:32.349449  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:32.349460  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:32.362203  106017 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1212 00:00:32.850412  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:32.850436  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:32.850447  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:32.850455  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:32.854786  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:33.349683  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:33.349707  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:33.349714  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:33.349718  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:33.354356  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:33.849742  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:33.849766  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:33.849774  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:33.849778  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:33.854313  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:34.350516  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:34.350539  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:34.350547  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:34.350551  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:34.355023  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:34.355775  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:34.850173  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:34.850197  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:34.850206  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:34.850210  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:34.853276  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:35.350529  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:35.350560  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:35.350568  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:35.350574  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:35.354219  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:35.850352  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:35.850378  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:35.850386  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:35.850391  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:35.853507  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:36.349531  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:36.349555  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:36.349566  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:36.349572  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:36.353110  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:36.849604  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:36.849629  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:36.849640  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:36.849645  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:36.856046  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:00:36.856697  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:37.349961  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:37.349980  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:37.349989  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:37.349993  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:37.354377  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:37.849622  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:37.849647  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:37.849660  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:37.849665  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:37.853494  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:38.349611  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:38.349641  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:38.349654  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:38.349686  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:38.354211  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:38.850399  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:38.850424  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:38.850434  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:38.850440  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:38.854312  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:39.350249  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:39.350275  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:39.350288  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:39.350293  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:39.354293  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:39.355152  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:39.849553  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:39.849578  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:39.849587  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:39.849592  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:39.854321  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:40.350406  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:40.350438  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:40.350450  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:40.350456  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:40.354039  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:40.850576  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:40.850604  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:40.850615  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:40.850620  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:40.854393  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.349882  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:41.349908  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:41.349919  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:41.349925  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:41.353612  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.849701  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:41.849723  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:41.849732  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:41.849737  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:41.852781  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:41.853447  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:42.349592  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:42.349615  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:42.349624  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:42.349629  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:42.352747  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:42.849858  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:42.849881  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:42.849889  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:42.849894  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:42.853198  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.350237  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:43.350265  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:43.350274  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:43.350278  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:43.353850  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.850187  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:43.850215  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:43.850227  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:43.850232  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:43.853783  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:43.854292  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:44.349681  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:44.349707  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:44.349714  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:44.349719  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:44.353562  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:44.849731  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:44.849764  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:44.849775  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:44.849783  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:44.853689  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:45.349741  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:45.349768  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:45.349777  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:45.349781  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:45.353601  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:45.849492  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:45.849515  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:45.849524  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:45.849528  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:45.853061  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:46.349543  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:46.349573  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:46.349584  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:46.349589  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:46.352599  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:46.353168  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:46.850149  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:46.850169  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:46.850177  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:46.850182  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:46.854205  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:47.350169  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:47.350191  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:47.350200  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:47.350206  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:47.353664  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:47.849752  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:47.849780  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:47.849793  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:47.849798  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:47.853354  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:48.350356  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:48.350379  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:48.350387  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:48.350391  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:48.353938  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:48.354537  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:48.849794  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:48.849820  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:48.849829  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:48.849834  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:48.853163  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:49.350186  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:49.350215  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:49.350224  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:49.350229  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:49.353713  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:49.849652  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:49.849676  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:49.849684  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:49.849687  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:49.853033  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.350113  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:50.350142  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:50.350153  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:50.350159  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:50.353742  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.849593  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:50.849613  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:50.849621  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:50.849624  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:50.852952  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:50.853510  106017 node_ready.go:53] node "ha-565823-m02" has status "Ready":"False"
	I1212 00:00:51.349926  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:51.349948  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:51.349957  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:51.349963  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:51.353301  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:51.849615  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:51.849638  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:51.849646  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:51.849655  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:51.853844  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:52.350547  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.350572  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.350580  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.350584  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.354248  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:52.850223  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.850252  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.850263  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.850268  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.853470  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:52.854190  106017 node_ready.go:49] node "ha-565823-m02" has status "Ready":"True"
	I1212 00:00:52.854220  106017 node_ready.go:38] duration metric: took 20.504892955s for node "ha-565823-m02" to be "Ready" ...
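The repeated GET /api/v1/nodes/ha-565823-m02 calls above are a plain poll on the node's Ready condition, which flipped to True after roughly 20.5s. A minimal client-go sketch of the same wait, using the kubeconfig path from this run; the 500ms interval matches the cadence visible in the log, the rest is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
        for time.Now().Before(deadline) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-565823-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node ha-565823-m02 is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for node to become Ready")
    }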
	I1212 00:00:52.854231  106017 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:00:52.854318  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:52.854327  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.854334  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.854339  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.859106  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:52.865543  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.865630  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4q46c
	I1212 00:00:52.865638  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.865646  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.865651  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.868523  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.869398  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.869413  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.869424  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.869431  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.871831  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.872543  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.872562  106017 pod_ready.go:82] duration metric: took 6.990987ms for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.872571  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.872619  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mqzbv
	I1212 00:00:52.872627  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.872633  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.872639  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.874818  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.875523  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.875541  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.875551  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.875557  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.877466  106017 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1212 00:00:52.878112  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.878131  106017 pod_ready.go:82] duration metric: took 5.554087ms for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.878140  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.878190  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823
	I1212 00:00:52.878197  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.878204  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.878211  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.880364  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.880870  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:52.880885  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.880891  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.880895  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.883116  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.883560  106017 pod_ready.go:93] pod "etcd-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.883576  106017 pod_ready.go:82] duration metric: took 5.430598ms for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.883587  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.883672  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m02
	I1212 00:00:52.883682  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.883691  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.883700  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.886455  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.887079  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:52.887092  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:52.887099  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:52.887104  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:52.889373  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:00:52.889794  106017 pod_ready.go:93] pod "etcd-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:52.889810  106017 pod_ready.go:82] duration metric: took 6.198051ms for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:52.889825  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.051288  106017 request.go:632] Waited for 161.36947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:00:53.051368  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:00:53.051379  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.051390  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.051401  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.055000  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.251236  106017 request.go:632] Waited for 195.409824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:53.251334  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:53.251344  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.251352  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.251356  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.254773  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.255341  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:53.255360  106017 pod_ready.go:82] duration metric: took 365.529115ms for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.255371  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.450696  106017 request.go:632] Waited for 195.24618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:00:53.450768  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:00:53.450773  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.450782  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.450788  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.454132  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.650685  106017 request.go:632] Waited for 195.384956ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:53.650745  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:53.650751  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.650758  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.650762  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.654400  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:53.655229  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:53.655251  106017 pod_ready.go:82] duration metric: took 399.872206ms for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.655268  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:53.850267  106017 request.go:632] Waited for 194.898023ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:00:53.850372  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:00:53.850386  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:53.850398  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:53.850408  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:53.853683  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.050714  106017 request.go:632] Waited for 196.358846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.050791  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.050798  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.050810  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.050821  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.056588  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:00:54.057030  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.057048  106017 pod_ready.go:82] duration metric: took 401.768958ms for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.057064  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.251122  106017 request.go:632] Waited for 193.98571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:00:54.251196  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:00:54.251202  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.251209  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.251215  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.254477  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.451067  106017 request.go:632] Waited for 195.40262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:54.451162  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:54.451179  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.451188  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.451192  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.455097  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.455639  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.455655  106017 pod_ready.go:82] duration metric: took 398.584366ms for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.455670  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.650842  106017 request.go:632] Waited for 195.080577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:00:54.650913  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:00:54.650919  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.650926  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.650932  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.654798  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.851030  106017 request.go:632] Waited for 195.376895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.851100  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:54.851111  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:54.851123  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:54.851133  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:54.854879  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:54.855493  106017 pod_ready.go:93] pod "kube-proxy-hr5qc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:54.855509  106017 pod_ready.go:82] duration metric: took 399.831743ms for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:54.855522  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.050825  106017 request.go:632] Waited for 195.216303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:00:55.050891  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:00:55.050897  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.050904  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.050910  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.055618  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.250720  106017 request.go:632] Waited for 194.371361ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:55.250781  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:55.250786  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.250795  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.250802  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.255100  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.255613  106017 pod_ready.go:93] pod "kube-proxy-p2lsd" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:55.255633  106017 pod_ready.go:82] duration metric: took 400.104583ms for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.255659  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.450909  106017 request.go:632] Waited for 195.147666ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:00:55.450990  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:00:55.450999  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.451016  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.451026  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.455430  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:55.650645  106017 request.go:632] Waited for 194.425591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:55.650713  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:00:55.650719  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.650727  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.650736  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.654680  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:55.655493  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:55.655512  106017 pod_ready.go:82] duration metric: took 399.840095ms for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.655522  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:55.850696  106017 request.go:632] Waited for 195.072101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:00:55.850764  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:00:55.850769  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:55.850777  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:55.850782  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:55.855247  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.050354  106017 request.go:632] Waited for 194.294814ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:56.050422  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:00:56.050428  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.050438  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.050441  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.053971  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:00:56.054426  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:00:56.054442  106017 pod_ready.go:82] duration metric: took 398.914314ms for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:00:56.054455  106017 pod_ready.go:39] duration metric: took 3.200213001s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:00:56.054475  106017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:00:56.054526  106017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:00:56.072661  106017 api_server.go:72] duration metric: took 23.930895419s to wait for apiserver process to appear ...
	I1212 00:00:56.072689  106017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:00:56.072711  106017 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1212 00:00:56.077698  106017 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1212 00:00:56.077790  106017 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1212 00:00:56.077803  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.077813  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.077823  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.078602  106017 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:00:56.078749  106017 api_server.go:141] control plane version: v1.31.2
	I1212 00:00:56.078777  106017 api_server.go:131] duration metric: took 6.080516ms to wait for apiserver health ...
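The health gate above is an HTTPS GET of /healthz (expecting the body "ok") followed by /version to read the reported control-plane version. A sketch with net/http using the client certificate and CA paths that appear in the rest.Config dump earlier in this log; the endpoint and file paths come from the log, the program itself is illustrative:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        base := "/home/jenkins/minikube-integration/20083-86355/.minikube"

        cert, err := tls.LoadX509KeyPair(
            base+"/profiles/ha-565823/client.crt",
            base+"/profiles/ha-565823/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile(base + "/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        }}

        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.39.19:8443" + path)
            if err != nil {
                panic(err)
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
        }
    }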
	I1212 00:00:56.078787  106017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:00:56.251224  106017 request.go:632] Waited for 172.358728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.251308  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.251314  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.251322  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.251328  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.257604  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:00:56.263097  106017 system_pods.go:59] 17 kube-system pods found
	I1212 00:00:56.263131  106017 system_pods.go:61] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:00:56.263138  106017 system_pods.go:61] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:00:56.263146  106017 system_pods.go:61] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:00:56.263154  106017 system_pods.go:61] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:00:56.263159  106017 system_pods.go:61] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:00:56.263164  106017 system_pods.go:61] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:00:56.263168  106017 system_pods.go:61] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:00:56.263173  106017 system_pods.go:61] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:00:56.263179  106017 system_pods.go:61] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:00:56.263184  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:00:56.263191  106017 system_pods.go:61] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:00:56.263197  106017 system_pods.go:61] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:00:56.263203  106017 system_pods.go:61] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:00:56.263211  106017 system_pods.go:61] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:00:56.263216  106017 system_pods.go:61] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:00:56.263222  106017 system_pods.go:61] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:00:56.263228  106017 system_pods.go:61] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:00:56.263239  106017 system_pods.go:74] duration metric: took 184.44261ms to wait for pod list to return data ...
	I1212 00:00:56.263253  106017 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:00:56.450737  106017 request.go:632] Waited for 187.395152ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:00:56.450799  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:00:56.450805  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.450817  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.450824  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.455806  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.456064  106017 default_sa.go:45] found service account: "default"
	I1212 00:00:56.456083  106017 default_sa.go:55] duration metric: took 192.823176ms for default service account to be created ...
	I1212 00:00:56.456093  106017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:00:56.650300  106017 request.go:632] Waited for 194.107546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.650372  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:00:56.650380  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.650392  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.650403  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.656388  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:00:56.662029  106017 system_pods.go:86] 17 kube-system pods found
	I1212 00:00:56.662073  106017 system_pods.go:89] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:00:56.662082  106017 system_pods.go:89] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:00:56.662088  106017 system_pods.go:89] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:00:56.662094  106017 system_pods.go:89] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:00:56.662100  106017 system_pods.go:89] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:00:56.662108  106017 system_pods.go:89] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:00:56.662118  106017 system_pods.go:89] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:00:56.662124  106017 system_pods.go:89] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:00:56.662133  106017 system_pods.go:89] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:00:56.662140  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:00:56.662148  106017 system_pods.go:89] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:00:56.662153  106017 system_pods.go:89] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:00:56.662161  106017 system_pods.go:89] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:00:56.662165  106017 system_pods.go:89] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:00:56.662173  106017 system_pods.go:89] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:00:56.662178  106017 system_pods.go:89] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:00:56.662187  106017 system_pods.go:89] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:00:56.662196  106017 system_pods.go:126] duration metric: took 206.091251ms to wait for k8s-apps to be running ...
	I1212 00:00:56.662210  106017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:00:56.662262  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:00:56.679491  106017 system_svc.go:56] duration metric: took 17.268621ms WaitForService to wait for kubelet
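The kubelet gate above boils down to running `systemctl is-active --quiet` for the kubelet unit over SSH; `is-active --quiet` exits 0 only when the unit is active, so the exit code alone answers the question. A local-only sketch of the same check (run directly rather than through minikube's ssh_runner, which is an assumption made just to keep the example self-contained):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet <unit>` exits 0 when the unit is active,
		// non-zero otherwise.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}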
	I1212 00:00:56.679526  106017 kubeadm.go:582] duration metric: took 24.537768524s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:00:56.679546  106017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:00:56.851276  106017 request.go:632] Waited for 171.630771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1212 00:00:56.851341  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1212 00:00:56.851347  106017 round_trippers.go:469] Request Headers:
	I1212 00:00:56.851354  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:00:56.851363  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:00:56.856253  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:00:56.857605  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:00:56.857634  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:00:56.857650  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:00:56.857655  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:00:56.857661  106017 node_conditions.go:105] duration metric: took 178.109574ms to run NodePressure ...
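The NodePressure step reads each node's capacity from the /api/v1/nodes list; the log prints ephemeral-storage and cpu for both control-plane nodes. A rough sketch of pulling those two fields out of that endpoint's JSON with only the standard library (the URL and the skipped TLS verification are illustrative assumptions, and only the fields needed here are modelled):

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
	)

	// Only the fields needed for the capacity check are modelled here.
	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Capacity map[string]string `json:"capacity"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test cluster, illustration only
		}}
		resp, err := client.Get("https://192.168.39.19:8443/api/v1/nodes")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		var nodes nodeList
		if err := json.NewDecoder(resp.Body).Decode(&nodes); err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
				n.Metadata.Name, n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"])
		}
	}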
	I1212 00:00:56.857683  106017 start.go:241] waiting for startup goroutines ...
	I1212 00:00:56.857713  106017 start.go:255] writing updated cluster config ...
	I1212 00:00:56.859819  106017 out.go:201] 
	I1212 00:00:56.861355  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:00:56.861459  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:56.863133  106017 out.go:177] * Starting "ha-565823-m03" control-plane node in "ha-565823" cluster
	I1212 00:00:56.864330  106017 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:00:56.864351  106017 cache.go:56] Caching tarball of preloaded images
	I1212 00:00:56.864443  106017 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:00:56.864454  106017 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 00:00:56.864537  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:00:56.864703  106017 start.go:360] acquireMachinesLock for ha-565823-m03: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:00:56.864743  106017 start.go:364] duration metric: took 22.236µs to acquireMachinesLock for "ha-565823-m03"
	I1212 00:00:56.864764  106017 start.go:93] Provisioning new machine with config: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:00:56.864862  106017 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1212 00:00:56.866313  106017 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 00:00:56.866390  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:00:56.866430  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:00:56.881400  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40505
	I1212 00:00:56.881765  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:00:56.882247  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:00:56.882274  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:00:56.882594  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:00:56.882778  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:00:56.882918  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:00:56.883084  106017 start.go:159] libmachine.API.Create for "ha-565823" (driver="kvm2")
	I1212 00:00:56.883116  106017 client.go:168] LocalClient.Create starting
	I1212 00:00:56.883150  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 00:00:56.883194  106017 main.go:141] libmachine: Decoding PEM data...
	I1212 00:00:56.883215  106017 main.go:141] libmachine: Parsing certificate...
	I1212 00:00:56.883281  106017 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 00:00:56.883314  106017 main.go:141] libmachine: Decoding PEM data...
	I1212 00:00:56.883330  106017 main.go:141] libmachine: Parsing certificate...
	I1212 00:00:56.883354  106017 main.go:141] libmachine: Running pre-create checks...
	I1212 00:00:56.883365  106017 main.go:141] libmachine: (ha-565823-m03) Calling .PreCreateCheck
	I1212 00:00:56.883572  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:00:56.883977  106017 main.go:141] libmachine: Creating machine...
	I1212 00:00:56.883994  106017 main.go:141] libmachine: (ha-565823-m03) Calling .Create
	I1212 00:00:56.884152  106017 main.go:141] libmachine: (ha-565823-m03) Creating KVM machine...
	I1212 00:00:56.885388  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found existing default KVM network
	I1212 00:00:56.885537  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found existing private KVM network mk-ha-565823
	I1212 00:00:56.885677  106017 main.go:141] libmachine: (ha-565823-m03) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 ...
	I1212 00:00:56.885696  106017 main.go:141] libmachine: (ha-565823-m03) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 00:00:56.885764  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:56.885674  106823 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:00:56.885859  106017 main.go:141] libmachine: (ha-565823-m03) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 00:00:57.157670  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.157529  106823 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa...
	I1212 00:00:57.207576  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.207455  106823 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/ha-565823-m03.rawdisk...
	I1212 00:00:57.207627  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Writing magic tar header
	I1212 00:00:57.207643  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Writing SSH key tar header
	I1212 00:00:57.207726  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:57.207648  106823 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 ...
	I1212 00:00:57.207776  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03
	I1212 00:00:57.207803  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03 (perms=drwx------)
	I1212 00:00:57.207814  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 00:00:57.207826  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:00:57.207832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 00:00:57.207841  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 00:00:57.207846  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home/jenkins
	I1212 00:00:57.207853  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Checking permissions on dir: /home
	I1212 00:00:57.207859  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Skipping /home - not owner
	I1212 00:00:57.207869  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 00:00:57.207875  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 00:00:57.207903  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 00:00:57.207923  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 00:00:57.207937  106017 main.go:141] libmachine: (ha-565823-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 00:00:57.207945  106017 main.go:141] libmachine: (ha-565823-m03) Creating domain...
	I1212 00:00:57.208764  106017 main.go:141] libmachine: (ha-565823-m03) define libvirt domain using xml: 
	I1212 00:00:57.208779  106017 main.go:141] libmachine: (ha-565823-m03) <domain type='kvm'>
	I1212 00:00:57.208785  106017 main.go:141] libmachine: (ha-565823-m03)   <name>ha-565823-m03</name>
	I1212 00:00:57.208790  106017 main.go:141] libmachine: (ha-565823-m03)   <memory unit='MiB'>2200</memory>
	I1212 00:00:57.208795  106017 main.go:141] libmachine: (ha-565823-m03)   <vcpu>2</vcpu>
	I1212 00:00:57.208799  106017 main.go:141] libmachine: (ha-565823-m03)   <features>
	I1212 00:00:57.208803  106017 main.go:141] libmachine: (ha-565823-m03)     <acpi/>
	I1212 00:00:57.208807  106017 main.go:141] libmachine: (ha-565823-m03)     <apic/>
	I1212 00:00:57.208816  106017 main.go:141] libmachine: (ha-565823-m03)     <pae/>
	I1212 00:00:57.208827  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.208832  106017 main.go:141] libmachine: (ha-565823-m03)   </features>
	I1212 00:00:57.208837  106017 main.go:141] libmachine: (ha-565823-m03)   <cpu mode='host-passthrough'>
	I1212 00:00:57.208849  106017 main.go:141] libmachine: (ha-565823-m03)   
	I1212 00:00:57.208858  106017 main.go:141] libmachine: (ha-565823-m03)   </cpu>
	I1212 00:00:57.208866  106017 main.go:141] libmachine: (ha-565823-m03)   <os>
	I1212 00:00:57.208875  106017 main.go:141] libmachine: (ha-565823-m03)     <type>hvm</type>
	I1212 00:00:57.208882  106017 main.go:141] libmachine: (ha-565823-m03)     <boot dev='cdrom'/>
	I1212 00:00:57.208899  106017 main.go:141] libmachine: (ha-565823-m03)     <boot dev='hd'/>
	I1212 00:00:57.208912  106017 main.go:141] libmachine: (ha-565823-m03)     <bootmenu enable='no'/>
	I1212 00:00:57.208918  106017 main.go:141] libmachine: (ha-565823-m03)   </os>
	I1212 00:00:57.208926  106017 main.go:141] libmachine: (ha-565823-m03)   <devices>
	I1212 00:00:57.208933  106017 main.go:141] libmachine: (ha-565823-m03)     <disk type='file' device='cdrom'>
	I1212 00:00:57.208946  106017 main.go:141] libmachine: (ha-565823-m03)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/boot2docker.iso'/>
	I1212 00:00:57.208957  106017 main.go:141] libmachine: (ha-565823-m03)       <target dev='hdc' bus='scsi'/>
	I1212 00:00:57.208964  106017 main.go:141] libmachine: (ha-565823-m03)       <readonly/>
	I1212 00:00:57.208971  106017 main.go:141] libmachine: (ha-565823-m03)     </disk>
	I1212 00:00:57.208981  106017 main.go:141] libmachine: (ha-565823-m03)     <disk type='file' device='disk'>
	I1212 00:00:57.208993  106017 main.go:141] libmachine: (ha-565823-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 00:00:57.209040  106017 main.go:141] libmachine: (ha-565823-m03)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/ha-565823-m03.rawdisk'/>
	I1212 00:00:57.209066  106017 main.go:141] libmachine: (ha-565823-m03)       <target dev='hda' bus='virtio'/>
	I1212 00:00:57.209075  106017 main.go:141] libmachine: (ha-565823-m03)     </disk>
	I1212 00:00:57.209092  106017 main.go:141] libmachine: (ha-565823-m03)     <interface type='network'>
	I1212 00:00:57.209105  106017 main.go:141] libmachine: (ha-565823-m03)       <source network='mk-ha-565823'/>
	I1212 00:00:57.209114  106017 main.go:141] libmachine: (ha-565823-m03)       <model type='virtio'/>
	I1212 00:00:57.209125  106017 main.go:141] libmachine: (ha-565823-m03)     </interface>
	I1212 00:00:57.209136  106017 main.go:141] libmachine: (ha-565823-m03)     <interface type='network'>
	I1212 00:00:57.209145  106017 main.go:141] libmachine: (ha-565823-m03)       <source network='default'/>
	I1212 00:00:57.209155  106017 main.go:141] libmachine: (ha-565823-m03)       <model type='virtio'/>
	I1212 00:00:57.209164  106017 main.go:141] libmachine: (ha-565823-m03)     </interface>
	I1212 00:00:57.209179  106017 main.go:141] libmachine: (ha-565823-m03)     <serial type='pty'>
	I1212 00:00:57.209191  106017 main.go:141] libmachine: (ha-565823-m03)       <target port='0'/>
	I1212 00:00:57.209198  106017 main.go:141] libmachine: (ha-565823-m03)     </serial>
	I1212 00:00:57.209211  106017 main.go:141] libmachine: (ha-565823-m03)     <console type='pty'>
	I1212 00:00:57.209219  106017 main.go:141] libmachine: (ha-565823-m03)       <target type='serial' port='0'/>
	I1212 00:00:57.209228  106017 main.go:141] libmachine: (ha-565823-m03)     </console>
	I1212 00:00:57.209238  106017 main.go:141] libmachine: (ha-565823-m03)     <rng model='virtio'>
	I1212 00:00:57.209275  106017 main.go:141] libmachine: (ha-565823-m03)       <backend model='random'>/dev/random</backend>
	I1212 00:00:57.209299  106017 main.go:141] libmachine: (ha-565823-m03)     </rng>
	I1212 00:00:57.209310  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.209316  106017 main.go:141] libmachine: (ha-565823-m03)     
	I1212 00:00:57.209327  106017 main.go:141] libmachine: (ha-565823-m03)   </devices>
	I1212 00:00:57.209344  106017 main.go:141] libmachine: (ha-565823-m03) </domain>
	I1212 00:00:57.209358  106017 main.go:141] libmachine: (ha-565823-m03) 
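The block above dumps the libvirt domain definition the kvm2 driver generates for the new node: a 2-vCPU, 2200 MiB guest that boots the boot2docker ISO from a SCSI cdrom, has its raw disk on virtio, and gets two virtio NICs (the cluster network mk-ha-565823 plus libvirt's default network). As a reading aid only, here is a trimmed Go sketch that renders an equivalent XML from a template; the values are copied from the log, the placeholder paths are not the real store paths, and this is not the exact template libmachine uses:

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed-down libvirt domain template with the same shape as the XML
	// printed in the log; only the fields that vary per node are parameterised.
	const domainXML = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISOPath}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.ClusterNetwork}}'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		tmpl := template.Must(template.New("domain").Parse(domainXML))
		_ = tmpl.Execute(os.Stdout, map[string]interface{}{
			"Name":           "ha-565823-m03",
			"MemoryMiB":      2200,
			"CPUs":           2,
			"ISOPath":        "/path/to/boot2docker.iso",       // placeholder
			"DiskPath":       "/path/to/ha-565823-m03.rawdisk", // placeholder
			"ClusterNetwork": "mk-ha-565823",
		})
	}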
	I1212 00:00:57.216296  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:a0:11:b6 in network default
	I1212 00:00:57.216833  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring networks are active...
	I1212 00:00:57.216849  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:57.217611  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring network default is active
	I1212 00:00:57.217884  106017 main.go:141] libmachine: (ha-565823-m03) Ensuring network mk-ha-565823 is active
	I1212 00:00:57.218224  106017 main.go:141] libmachine: (ha-565823-m03) Getting domain xml...
	I1212 00:00:57.218920  106017 main.go:141] libmachine: (ha-565823-m03) Creating domain...
	I1212 00:00:58.452742  106017 main.go:141] libmachine: (ha-565823-m03) Waiting to get IP...
	I1212 00:00:58.453425  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:58.453790  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:58.453832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:58.453785  106823 retry.go:31] will retry after 272.104158ms: waiting for machine to come up
	I1212 00:00:58.727281  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:58.727898  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:58.727928  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:58.727841  106823 retry.go:31] will retry after 285.622453ms: waiting for machine to come up
	I1212 00:00:59.015493  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.016037  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.016069  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.015997  106823 retry.go:31] will retry after 462.910385ms: waiting for machine to come up
	I1212 00:00:59.480661  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.481128  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.481154  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.481091  106823 retry.go:31] will retry after 428.639733ms: waiting for machine to come up
	I1212 00:00:59.911938  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:00:59.912474  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:00:59.912505  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:00:59.912415  106823 retry.go:31] will retry after 493.229639ms: waiting for machine to come up
	I1212 00:01:00.406997  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:00.407456  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:00.407482  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:00.407400  106823 retry.go:31] will retry after 633.230425ms: waiting for machine to come up
	I1212 00:01:01.042449  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:01.042884  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:01.042905  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:01.042838  106823 retry.go:31] will retry after 978.049608ms: waiting for machine to come up
	I1212 00:01:02.022776  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:02.023212  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:02.023245  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:02.023153  106823 retry.go:31] will retry after 1.111513755s: waiting for machine to come up
	I1212 00:01:03.136308  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:03.136734  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:03.136763  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:03.136679  106823 retry.go:31] will retry after 1.728462417s: waiting for machine to come up
	I1212 00:01:04.867619  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:04.868118  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:04.868157  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:04.868052  106823 retry.go:31] will retry after 1.898297589s: waiting for machine to come up
	I1212 00:01:06.769272  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:06.769757  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:06.769825  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:06.769731  106823 retry.go:31] will retry after 1.922578081s: waiting for machine to come up
	I1212 00:01:08.693477  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:08.693992  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:08.694026  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:08.693918  106823 retry.go:31] will retry after 2.235570034s: waiting for machine to come up
	I1212 00:01:10.932341  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:10.932805  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:10.932827  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:10.932750  106823 retry.go:31] will retry after 4.200404272s: waiting for machine to come up
	I1212 00:01:15.136581  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:15.136955  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find current IP address of domain ha-565823-m03 in network mk-ha-565823
	I1212 00:01:15.136979  106017 main.go:141] libmachine: (ha-565823-m03) DBG | I1212 00:01:15.136906  106823 retry.go:31] will retry after 4.331994391s: waiting for machine to come up
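The "Waiting to get IP" block is a retry loop: every failed DHCP-lease lookup schedules another attempt after a randomised, growing delay (272ms, 285ms, 462ms, ... up to a few seconds). A generic sketch of that pattern with the lookup stubbed out; the backoff parameters and the short demo deadline are assumptions, not the exact values retry.go uses:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the real DHCP-lease query against libvirt;
	// here it always fails so the retry behaviour is visible.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		backoff := 250 * time.Millisecond
		deadline := time.Now().Add(15 * time.Second) // short deadline just for the demo

		for time.Now().Before(deadline) {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("found IP:", ip)
				return
			}
			// Randomise the delay a little and grow it on every failure,
			// which is the shape of the 272ms -> 4.3s progression in the log.
			wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			backoff = backoff * 3 / 2
		}
		fmt.Println("gave up waiting for machine to come up")
	}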
	I1212 00:01:19.472184  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.472659  106017 main.go:141] libmachine: (ha-565823-m03) Found IP for machine: 192.168.39.95
	I1212 00:01:19.472679  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has current primary IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.472686  106017 main.go:141] libmachine: (ha-565823-m03) Reserving static IP address...
	I1212 00:01:19.473105  106017 main.go:141] libmachine: (ha-565823-m03) DBG | unable to find host DHCP lease matching {name: "ha-565823-m03", mac: "52:54:00:03:bd:55", ip: "192.168.39.95"} in network mk-ha-565823
	I1212 00:01:19.544988  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Getting to WaitForSSH function...
	I1212 00:01:19.545019  106017 main.go:141] libmachine: (ha-565823-m03) Reserved static IP address: 192.168.39.95
	I1212 00:01:19.545082  106017 main.go:141] libmachine: (ha-565823-m03) Waiting for SSH to be available...
	I1212 00:01:19.547914  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.548457  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:minikube Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.548493  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.548645  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using SSH client type: external
	I1212 00:01:19.548672  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa (-rw-------)
	I1212 00:01:19.548700  106017 main.go:141] libmachine: (ha-565823-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.95 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:01:19.548714  106017 main.go:141] libmachine: (ha-565823-m03) DBG | About to run SSH command:
	I1212 00:01:19.548726  106017 main.go:141] libmachine: (ha-565823-m03) DBG | exit 0
	I1212 00:01:19.675749  106017 main.go:141] libmachine: (ha-565823-m03) DBG | SSH cmd err, output: <nil>: 
	I1212 00:01:19.676029  106017 main.go:141] libmachine: (ha-565823-m03) KVM machine creation complete!
	I1212 00:01:19.676360  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:01:19.676900  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:19.677088  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:19.677296  106017 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:01:19.677311  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetState
	I1212 00:01:19.678472  106017 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:01:19.678488  106017 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:01:19.678497  106017 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:01:19.678505  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.680612  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.680988  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.681021  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.681172  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.681326  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.681449  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.681545  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.681635  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.681832  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.681842  106017 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:01:19.794939  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
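WaitForSSH is effectively running `exit 0` through an ssh client with host-key checking disabled (the full option list is printed a few lines up); the machine counts as reachable once that command returns without error. A minimal sketch of the same probe; the key path and address are taken from the log, while the attempt count and sleep interval are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa"
		target := "docker@192.168.39.95"

		for attempt := 1; attempt <= 30; attempt++ {
			// Same idea as the logged command: a throwaway `exit 0` over ssh,
			// with host-key checks disabled because the VM was just created.
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", key, target, "exit 0")
			if err := cmd.Run(); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for SSH")
	}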
	I1212 00:01:19.794969  106017 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:01:19.794980  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.797552  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.797884  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.797916  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.798040  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.798220  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.798369  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.798507  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.798667  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.798834  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.798844  106017 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:01:19.912451  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:01:19.912540  106017 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:01:19.912555  106017 main.go:141] libmachine: Provisioning with buildroot...
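The provisioner is detected by running `cat /etc/os-release` on the guest and matching its fields against known distributions; the Buildroot-based minikube ISO resolves to the buildroot provisioner. A small sketch of parsing that key=value output, with the output copied from the log; the matching rule here is a simplification of what libmachine actually does:

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// Output copied from the log above.
	const osRelease = `NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"`

	func main() {
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if k, v, ok := strings.Cut(line, "="); ok {
				fields[k] = strings.Trim(v, `"`)
			}
		}
		// Simplified check: treat a buildroot ID as the known-compatible host.
		if fields["ID"] == "buildroot" {
			fmt.Println("found compatible host: buildroot", fields["VERSION_ID"])
		} else {
			fmt.Println("unknown provisioner:", fields["ID"])
		}
	}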
	I1212 00:01:19.912568  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:19.912805  106017 buildroot.go:166] provisioning hostname "ha-565823-m03"
	I1212 00:01:19.912831  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:19.912939  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:19.915606  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.916027  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:19.916059  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:19.916213  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:19.916386  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.916533  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:19.916630  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:19.916776  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:19.917012  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:19.917027  106017 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823-m03 && echo "ha-565823-m03" | sudo tee /etc/hostname
	I1212 00:01:20.047071  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823-m03
	
	I1212 00:01:20.047100  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.049609  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.050009  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.050034  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.050209  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.050389  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.050537  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.050700  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.050854  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.051086  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.051105  106017 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:01:20.174838  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:01:20.174877  106017 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:01:20.174898  106017 buildroot.go:174] setting up certificates
	I1212 00:01:20.174909  106017 provision.go:84] configureAuth start
	I1212 00:01:20.174924  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetMachineName
	I1212 00:01:20.175232  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:20.177664  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.178007  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.178038  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.178124  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.180472  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.180778  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.180806  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.180963  106017 provision.go:143] copyHostCerts
	I1212 00:01:20.180995  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:01:20.181046  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:01:20.181058  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:01:20.181146  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:01:20.181242  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:01:20.181266  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:01:20.181279  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:01:20.181315  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:01:20.181387  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:01:20.181413  106017 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:01:20.181419  106017 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:01:20.181456  106017 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:01:20.181524  106017 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823-m03 san=[127.0.0.1 192.168.39.95 ha-565823-m03 localhost minikube]
	I1212 00:01:20.442822  106017 provision.go:177] copyRemoteCerts
	I1212 00:01:20.442883  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:01:20.442916  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.445614  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.445950  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.445983  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.446122  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.446304  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.446460  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.446571  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:20.533808  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:01:20.533894  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 00:01:20.558631  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:01:20.558695  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:01:20.584088  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:01:20.584173  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 00:01:20.608061  106017 provision.go:87] duration metric: took 433.135165ms to configureAuth
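configureAuth generates a server certificate for the new node (the SANs include 127.0.0.1, the node IP, the hostname, localhost and minikube) and then pushes server-key.pem, ca.pem and server.pem into /etc/docker on the guest. The "scp" entries above come from minikube's own ssh_runner transfer helper; as a stand-in only, pushing one of those files with the stock scp client would look roughly like this (the /tmp staging path is an assumption, since /etc/docker needs root):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		key := "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa"
		local := "/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem"

		// Copy to a staging path first; a follow-up `sudo mv` (not shown)
		// would place it under /etc/docker on the guest.
		cmd := exec.Command("scp",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", key, local, "docker@192.168.39.95:/tmp/ca.pem")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("copy failed: %v\n%s", err, out)
			return
		}
		fmt.Println("copied ca.pem to the new node")
	}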
	I1212 00:01:20.608090  106017 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:01:20.608294  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:20.608371  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.611003  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.611319  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.611348  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.611489  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.611709  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.611885  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.612026  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.612174  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.612326  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.612341  106017 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:01:20.847014  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:01:20.847049  106017 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:01:20.847062  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetURL
	I1212 00:01:20.848448  106017 main.go:141] libmachine: (ha-565823-m03) DBG | Using libvirt version 6000000
	I1212 00:01:20.850813  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.851216  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.851246  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.851443  106017 main.go:141] libmachine: Docker is up and running!
	I1212 00:01:20.851459  106017 main.go:141] libmachine: Reticulating splines...
	I1212 00:01:20.851469  106017 client.go:171] duration metric: took 23.968343391s to LocalClient.Create
	I1212 00:01:20.851499  106017 start.go:167] duration metric: took 23.968416391s to libmachine.API.Create "ha-565823"
	I1212 00:01:20.851513  106017 start.go:293] postStartSetup for "ha-565823-m03" (driver="kvm2")
	I1212 00:01:20.851525  106017 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:01:20.851547  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:20.851812  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:01:20.851848  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.854066  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.854470  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.854498  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.854683  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.854881  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.855047  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.855202  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:20.942769  106017 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:01:20.947268  106017 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:01:20.947295  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:01:20.947350  106017 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:01:20.947427  106017 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:01:20.947438  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:01:20.947517  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:01:20.957067  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:01:20.982552  106017 start.go:296] duration metric: took 131.024484ms for postStartSetup
	I1212 00:01:20.982610  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetConfigRaw
	I1212 00:01:20.983169  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:20.985456  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.985914  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.985943  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.986219  106017 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:01:20.986450  106017 start.go:128] duration metric: took 24.12157496s to createHost
	I1212 00:01:20.986480  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:20.988832  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.989169  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:20.989192  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:20.989296  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:20.989476  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.989596  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:20.989695  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:20.989852  106017 main.go:141] libmachine: Using SSH client type: native
	I1212 00:01:20.990012  106017 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I1212 00:01:20.990022  106017 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:01:21.104340  106017 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733961681.076284817
	
	I1212 00:01:21.104366  106017 fix.go:216] guest clock: 1733961681.076284817
	I1212 00:01:21.104376  106017 fix.go:229] Guest: 2024-12-12 00:01:21.076284817 +0000 UTC Remote: 2024-12-12 00:01:20.986466192 +0000 UTC m=+151.148293246 (delta=89.818625ms)
	I1212 00:01:21.104397  106017 fix.go:200] guest clock delta is within tolerance: 89.818625ms
	I1212 00:01:21.104403  106017 start.go:83] releasing machines lock for "ha-565823-m03", held for 24.239651482s
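The guest-clock check above runs `date +%s.%N` inside the new VM and compares it against the host's wall clock at the moment the machine finished creating; the ~90ms delta is well inside minikube's tolerance, so no adjustment is made. A rough sketch of the same measurement, reusing the guest IP and SSH key path from the log above (illustrative only, not part of the test):

    # read the guest clock over SSH, then the host clock immediately after
    GUEST=$(ssh -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa \
        docker@192.168.39.95 'date +%s.%N')
    HOST=$(date +%s.%N)
    # print the skew; minikube only intervenes when it exceeds the tolerance
    awk -v h="$HOST" -v g="$GUEST" 'BEGIN{printf "guest/host clock delta: %.3f s\n", h-g}'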
	I1212 00:01:21.104427  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.104703  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:21.107255  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.107654  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.107680  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.109803  106017 out.go:177] * Found network options:
	I1212 00:01:21.111036  106017 out.go:177]   - NO_PROXY=192.168.39.19,192.168.39.103
	W1212 00:01:21.112272  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:01:21.112293  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:01:21.112306  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.112787  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.112963  106017 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:01:21.113063  106017 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:01:21.113107  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	W1212 00:01:21.113169  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	W1212 00:01:21.113192  106017 proxy.go:119] fail to check proxy env: Error ip not in block
	I1212 00:01:21.113246  106017 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:01:21.113266  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:01:21.115806  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.115895  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116242  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.116269  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116313  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:21.116334  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:21.116399  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:21.116570  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:01:21.116593  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:21.116694  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:01:21.116713  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:21.116861  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:01:21.116856  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:21.116989  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:01:21.354040  106017 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:01:21.360555  106017 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:01:21.360632  106017 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:01:21.379750  106017 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:01:21.379780  106017 start.go:495] detecting cgroup driver to use...
	I1212 00:01:21.379863  106017 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:01:21.395389  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:01:21.409350  106017 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:01:21.409431  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:01:21.425472  106017 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:01:21.440472  106017 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:01:21.567746  106017 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:01:21.711488  106017 docker.go:233] disabling docker service ...
	I1212 00:01:21.711577  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:01:21.727302  106017 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:01:21.740916  106017 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:01:21.878118  106017 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:01:22.013165  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:01:22.031377  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:01:22.050768  106017 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:01:22.050841  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.062469  106017 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:01:22.062542  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.074854  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.085834  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.096567  106017 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:01:22.110009  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.121122  106017 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.139153  106017 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:01:22.150221  106017 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:01:22.160252  106017 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:01:22.160329  106017 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:01:22.175082  106017 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:01:22.185329  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:22.327197  106017 ssh_runner.go:195] Run: sudo systemctl restart crio
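The steps from `systemctl stop -f containerd` down to the `systemctl restart crio` above switch the new node onto CRI-O: cri-docker and docker are stopped and masked, crictl is pointed at the CRI-O socket, the pause image and the cgroupfs cgroup manager are written into /etc/crio/crio.conf.d/02-crio.conf, and br_netfilter plus IPv4 forwarding are enabled. A hedged sketch of how one might spot-check the result on the node (same paths as above; not part of the test run):

    cat /etc/crictl.yaml                          # expect runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version                           # expect RuntimeName: cri-o
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    lsmod | grep br_netfilter                     # loaded by the modprobe above
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    systemctl is-active crio                      # expect: active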
	I1212 00:01:22.421776  106017 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:01:22.421853  106017 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:01:22.427874  106017 start.go:563] Will wait 60s for crictl version
	I1212 00:01:22.427937  106017 ssh_runner.go:195] Run: which crictl
	I1212 00:01:22.432412  106017 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:01:22.478561  106017 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:01:22.478659  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:01:22.507894  106017 ssh_runner.go:195] Run: crio --version
	I1212 00:01:22.541025  106017 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:01:22.542600  106017 out.go:177]   - env NO_PROXY=192.168.39.19
	I1212 00:01:22.544205  106017 out.go:177]   - env NO_PROXY=192.168.39.19,192.168.39.103
	I1212 00:01:22.545527  106017 main.go:141] libmachine: (ha-565823-m03) Calling .GetIP
	I1212 00:01:22.548679  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:22.549115  106017 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:01:22.549143  106017 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:01:22.549402  106017 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:01:22.553987  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:01:22.567227  106017 mustload.go:65] Loading cluster: ha-565823
	I1212 00:01:22.567647  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:22.568059  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:22.568178  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:22.583960  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44539
	I1212 00:01:22.584451  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:22.584977  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:22.585002  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:22.585378  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:22.585624  106017 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:01:22.587277  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:01:22.587636  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:22.587686  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:22.602128  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43965
	I1212 00:01:22.602635  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:22.603141  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:22.603163  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:22.603490  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:22.603676  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:01:22.603824  106017 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.95
	I1212 00:01:22.603837  106017 certs.go:194] generating shared ca certs ...
	I1212 00:01:22.603856  106017 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.603989  106017 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:01:22.604025  106017 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:01:22.604035  106017 certs.go:256] generating profile certs ...
	I1212 00:01:22.604113  106017 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:01:22.604138  106017 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c
	I1212 00:01:22.604153  106017 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.95 192.168.39.254]
	I1212 00:01:22.747110  106017 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c ...
	I1212 00:01:22.747151  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c: {Name:mke6cc66706783f55b7ebb6ba30cc07d7c6eb29b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.747333  106017 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c ...
	I1212 00:01:22.747345  106017 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c: {Name:mk0abaf339db164c799eddef60276ad5fb5ed33d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:01:22.747431  106017 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.bab6a67c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:01:22.747642  106017 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.bab6a67c -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
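The signed profile cert generated above becomes the API server's serving certificate on the new control-plane node, so its SAN list has to cover the service IP, localhost, every control-plane node IP and the kube-vip VIP 192.168.39.254 listed in the log, or clients going through the VIP would fail TLS verification. Assuming openssl is available on the Jenkins host, the SANs can be confirmed directly from the file written above:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # should list 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.19, 192.168.39.103,
    # 192.168.39.95 and 192.168.39.254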
	I1212 00:01:22.747827  106017 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:01:22.747853  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:01:22.747874  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:01:22.747894  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:01:22.747911  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:01:22.747929  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:01:22.747949  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:01:22.747967  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:01:22.767751  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:01:22.767871  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:01:22.767924  106017 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:01:22.767939  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:01:22.767972  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:01:22.768009  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:01:22.768041  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:01:22.768088  106017 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:01:22.768123  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:22.768140  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:01:22.768153  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:01:22.768246  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:01:22.771620  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:22.772074  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:01:22.772105  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:22.772278  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:01:22.772487  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:01:22.772661  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:01:22.772805  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:01:22.855976  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 00:01:22.862422  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 00:01:22.875336  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 00:01:22.881430  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 00:01:22.892620  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 00:01:22.897804  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 00:01:22.910746  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 00:01:22.916511  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1212 00:01:22.927437  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 00:01:22.932403  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 00:01:22.945174  106017 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 00:01:22.949699  106017 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 00:01:22.963425  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:01:22.991332  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:01:23.014716  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:01:23.038094  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:01:23.062120  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1212 00:01:23.086604  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:01:23.110420  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:01:23.136037  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:01:23.162577  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:01:23.188311  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:01:23.211713  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:01:23.235230  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 00:01:23.253375  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 00:01:23.271455  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 00:01:23.289505  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1212 00:01:23.307850  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 00:01:23.325848  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 00:01:23.344038  106017 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 00:01:23.362393  106017 ssh_runner.go:195] Run: openssl version
	I1212 00:01:23.368722  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:01:23.380405  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.385472  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.385534  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:01:23.392130  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:01:23.405241  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:01:23.418140  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.422762  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.422819  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:01:23.428754  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:01:23.441496  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:01:23.454394  106017 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.459170  106017 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.459227  106017 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:01:23.465192  106017 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
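Each certificate installed above is exposed to OpenSSL-based clients twice: under its own name in /usr/share/ca-certificates and as a /etc/ssl/certs/<subject-hash>.0 symlink, which is the lookup scheme `openssl verify -CApath` relies on. The hash in the link name is exactly what `openssl x509 -hash` prints; a minimal sketch of the idiom, using the minikube CA from the log (illustrative only):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # b5213941 for this CA, per the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs "$CERT"      # the self-signed CA should now verify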
	I1212 00:01:23.476720  106017 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:01:23.481551  106017 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:01:23.481615  106017 kubeadm.go:934] updating node {m03 192.168.39.95 8443 v1.31.2 crio true true} ...
	I1212 00:01:23.481715  106017 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:01:23.481752  106017 kube-vip.go:115] generating kube-vip config ...
	I1212 00:01:23.481784  106017 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:01:23.499895  106017 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:01:23.499971  106017 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
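The generated manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down, so kubelet runs kube-vip as a static pod that announces the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443 across the control-plane nodes. A rough way to confirm the VIP is live once the node is up (not part of the test run):

    sudo crictl ps --name kube-vip                  # static pod running under CRI-O
    ip addr show eth0 | grep 192.168.39.254         # present on whichever node holds the lease
    curl -k https://192.168.39.254:8443/healthz     # typically answers "ok" once the apiserver is up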
	I1212 00:01:23.500042  106017 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:01:23.510617  106017 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1212 00:01:23.510681  106017 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1212 00:01:23.520696  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1212 00:01:23.520748  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:01:23.520697  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1212 00:01:23.520779  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:01:23.520698  106017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1212 00:01:23.520844  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:01:23.520847  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1212 00:01:23.520904  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1212 00:01:23.539476  106017 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:01:23.539619  106017 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1212 00:01:23.539628  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1212 00:01:23.539658  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1212 00:01:23.539704  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1212 00:01:23.539735  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1212 00:01:23.554300  106017 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1212 00:01:23.554341  106017 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
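Because /var/lib/minikube/binaries/v1.31.2 did not exist on the new node, kubelet, kubeadm and kubectl are fetched from the dl.k8s.io URLs shown above, verified against their .sha256 companions, and then scp'd into place. The equivalent manual download-and-verify loop, using the same URLs, would look roughly like this:

    V=v1.31.2; ARCH=linux/amd64
    for BIN in kubelet kubeadm kubectl; do
      curl -LO "https://dl.k8s.io/release/${V}/bin/${ARCH}/${BIN}"
      curl -LO "https://dl.k8s.io/release/${V}/bin/${ARCH}/${BIN}.sha256"
      echo "$(cat ${BIN}.sha256)  ${BIN}" | sha256sum --check    # verify before copying to the node
    done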
	I1212 00:01:24.410276  106017 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 00:01:24.421207  106017 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1212 00:01:24.438691  106017 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:01:24.456935  106017 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:01:24.474104  106017 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:01:24.478799  106017 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:01:24.492116  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:24.635069  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:01:24.653898  106017 host.go:66] Checking if "ha-565823" exists ...
	I1212 00:01:24.654454  106017 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:01:24.654529  106017 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:01:24.669805  106017 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36453
	I1212 00:01:24.670391  106017 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:01:24.671018  106017 main.go:141] libmachine: Using API Version  1
	I1212 00:01:24.671047  106017 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:01:24.671400  106017 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:01:24.671580  106017 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:01:24.671761  106017 start.go:317] joinCluster: &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:01:24.671883  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1212 00:01:24.671905  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:01:24.675034  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:24.675479  106017 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:01:24.675501  106017 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:01:24.675693  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:01:24.675871  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:01:24.676006  106017 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:01:24.676127  106017 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:01:24.845860  106017 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:01:24.845904  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4sbqiu.4yic5pe52bxp935w --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443"
	I1212 00:01:47.124612  106017 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 4sbqiu.4yic5pe52bxp935w --discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565823-m03 --control-plane --apiserver-advertise-address=192.168.39.95 --apiserver-bind-port=8443": (22.27867542s)
	I1212 00:01:47.124662  106017 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1212 00:01:47.623528  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565823-m03 minikube.k8s.io/updated_at=2024_12_12T00_01_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=ha-565823 minikube.k8s.io/primary=false
	I1212 00:01:47.763869  106017 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565823-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1212 00:01:47.919307  106017 start.go:319] duration metric: took 23.247542297s to joinCluster
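With the `kubeadm join ... --control-plane` above finished, the node is labeled with the minikube.k8s.io/* metadata and its node-role.kubernetes.io/control-plane:NoSchedule taint is removed, since minikube also schedules workloads on control-plane nodes. A few hedged sanity checks one could run against the cluster at this point (kubectl assumed to use the profile's kubeconfig; not part of the test):

    kubectl get nodes -o wide                                      # ha-565823-m03 listed
    kubectl get node ha-565823-m03 --show-labels                   # labels applied above
    kubectl describe node ha-565823-m03 | grep -i taints           # <none> after the taint removal above
    kubectl -n kube-system get pods -o wide | grep ha-565823-m03   # etcd/apiserver/kube-vip pods on the node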
	I1212 00:01:47.919407  106017 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:01:47.919784  106017 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:01:47.920983  106017 out.go:177] * Verifying Kubernetes components...
	I1212 00:01:47.922471  106017 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:01:48.195755  106017 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:01:48.249445  106017 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:01:48.249790  106017 kapi.go:59] client config for ha-565823: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 00:01:48.249881  106017 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.19:8443
	I1212 00:01:48.250202  106017 node_ready.go:35] waiting up to 6m0s for node "ha-565823-m03" to be "Ready" ...
	I1212 00:01:48.250300  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:48.250311  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:48.250329  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:48.250338  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:48.255147  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:48.750647  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:48.750680  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:48.750691  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:48.750699  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:48.755066  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:49.251152  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:49.251203  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:49.251216  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:49.251222  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:49.254927  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:49.751403  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:49.751424  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:49.751432  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:49.751436  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:49.754669  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:50.250595  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:50.250620  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:50.250629  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:50.250633  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:50.254009  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:50.254537  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
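The loop above polls GET /api/v1/nodes/ha-565823-m03 roughly every half second until the node reports the Ready condition as True, within the 6m budget noted earlier. A kubectl equivalent of the same wait, for readers reproducing this by hand:

    kubectl wait --for=condition=Ready node/ha-565823-m03 --timeout=6m
    # or read the condition directly:
    kubectl get node ha-565823-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'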
	I1212 00:01:50.751206  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:50.751237  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:50.751250  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:50.751256  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:50.755159  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:51.250921  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:51.250950  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:51.250961  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:51.250967  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:51.255349  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:51.751245  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:51.751270  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:51.751283  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:51.751290  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:51.755162  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:52.250889  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:52.250916  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:52.250929  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:52.250935  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:52.254351  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:52.255115  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:52.750458  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:52.750481  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:52.750492  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:52.750499  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:52.753763  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:53.251029  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:53.251058  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:53.251071  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:53.251077  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:53.256338  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:01:53.751364  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:53.751389  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:53.751401  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:53.751414  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:53.754657  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:54.250629  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:54.250665  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:54.250675  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:54.250680  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:54.254457  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:54.255509  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:54.750450  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:54.750484  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:54.750496  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:54.750502  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:54.753928  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:55.251309  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:55.251338  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:55.251347  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:55.251351  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:55.254751  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:55.751050  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:55.751076  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:55.751089  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:55.751093  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:55.755810  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:01:56.250473  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:56.250504  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:56.250524  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:56.250530  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:56.253711  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:56.751414  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:56.751435  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:56.751444  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:56.751449  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:56.755218  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:56.755864  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:57.251118  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:57.251142  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:57.251150  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:57.251154  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:57.254747  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:57.750776  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:57.750806  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:57.750817  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:57.750829  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:57.754143  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:58.251295  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:58.251320  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:58.251329  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:58.251333  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:58.254626  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:58.750576  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:58.750599  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:58.750608  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:58.750611  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:58.754105  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:59.251173  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:59.251200  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:59.251209  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:59.251213  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:59.254355  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:01:59.255121  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:01:59.750953  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:01:59.750977  106017 round_trippers.go:469] Request Headers:
	I1212 00:01:59.750985  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:01:59.750989  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:01:59.754627  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:00.250978  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:00.251004  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:00.251013  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:00.251016  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:00.254467  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:00.750877  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:00.750901  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:00.750912  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:00.750918  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:00.754221  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:01.251370  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:01.251393  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:01.251401  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:01.251405  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:01.254805  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:01.255406  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:01.750655  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:01.750676  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:01.750684  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:01.750690  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:01.753736  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:02.251367  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:02.251390  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:02.251399  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:02.251403  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:02.255039  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:02.750915  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:02.750948  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:02.750958  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:02.750964  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:02.754145  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:03.250760  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:03.250788  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:03.250798  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:03.250805  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:03.260534  106017 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:02:03.261313  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:03.750548  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:03.750571  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:03.750582  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:03.750587  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:03.753887  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:04.250808  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:04.250830  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:04.250838  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:04.250841  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:04.254163  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:04.750428  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:04.750453  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:04.750464  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:04.750469  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:04.754235  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.251014  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:05.251038  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:05.251053  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:05.251061  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:05.254268  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.751257  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:05.751286  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:05.751300  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:05.751309  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:05.754346  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:05.755137  106017 node_ready.go:53] node "ha-565823-m03" has status "Ready":"False"
	I1212 00:02:06.250474  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:06.250500  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:06.250510  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:06.250515  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:06.253901  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:06.751012  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:06.751043  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:06.751062  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:06.751067  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:06.755777  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:02:07.250458  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:07.250481  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.250489  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.250494  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.254349  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:07.751140  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:07.751164  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.751172  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.751178  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.754545  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:07.755268  106017 node_ready.go:49] node "ha-565823-m03" has status "Ready":"True"
	I1212 00:02:07.755289  106017 node_ready.go:38] duration metric: took 19.505070997s for node "ha-565823-m03" to be "Ready" ...
	I1212 00:02:07.755298  106017 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:02:07.755371  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:07.755381  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.755388  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.755394  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.764865  106017 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1212 00:02:07.771847  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.771957  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4q46c
	I1212 00:02:07.771969  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.771979  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.771985  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.774662  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.775180  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.775197  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.775207  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.775212  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.778204  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.778657  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.778673  106017 pod_ready.go:82] duration metric: took 6.798091ms for pod "coredns-7c65d6cfc9-4q46c" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.778684  106017 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.778739  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mqzbv
	I1212 00:02:07.778749  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.778759  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.778766  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.780968  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.781650  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.781667  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.781674  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.781679  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.783908  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.784542  106017 pod_ready.go:93] pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.784564  106017 pod_ready.go:82] duration metric: took 5.872725ms for pod "coredns-7c65d6cfc9-mqzbv" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.784576  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.784636  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823
	I1212 00:02:07.784644  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.784651  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.784657  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.786892  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.787666  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:07.787681  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.787688  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.787694  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.789880  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.790470  106017 pod_ready.go:93] pod "etcd-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.790486  106017 pod_ready.go:82] duration metric: took 5.899971ms for pod "etcd-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.790494  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.790537  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m02
	I1212 00:02:07.790545  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.790552  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.790555  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.793137  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.793764  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:07.793781  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.793791  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.793799  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.796241  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:07.796610  106017 pod_ready.go:93] pod "etcd-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:07.796625  106017 pod_ready.go:82] duration metric: took 6.124204ms for pod "etcd-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.796636  106017 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:07.952109  106017 request.go:632] Waited for 155.381921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m03
	I1212 00:02:07.952174  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565823-m03
	I1212 00:02:07.952179  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:07.952187  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:07.952193  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:07.955641  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.151556  106017 request.go:632] Waited for 195.239119ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:08.151668  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:08.151684  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.151694  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.151702  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.154961  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.155639  106017 pod_ready.go:93] pod "etcd-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.155660  106017 pod_ready.go:82] duration metric: took 359.016335ms for pod "etcd-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.155677  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.351679  106017 request.go:632] Waited for 195.932687ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:02:08.351780  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823
	I1212 00:02:08.351790  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.351808  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.351821  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.355049  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.552214  106017 request.go:632] Waited for 196.357688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:08.552278  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:08.552283  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.552291  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.552295  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.555420  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.555971  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.555995  106017 pod_ready.go:82] duration metric: took 400.310286ms for pod "kube-apiserver-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.556009  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.752055  106017 request.go:632] Waited for 195.936446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:02:08.752134  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m02
	I1212 00:02:08.752141  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.752152  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.752161  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.755742  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:08.951367  106017 request.go:632] Waited for 194.249731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:08.951449  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:08.951462  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:08.951477  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:08.951487  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:08.956306  106017 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1212 00:02:08.956889  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:08.956911  106017 pod_ready.go:82] duration metric: took 400.890038ms for pod "kube-apiserver-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:08.956924  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.152049  106017 request.go:632] Waited for 195.045457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m03
	I1212 00:02:09.152139  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565823-m03
	I1212 00:02:09.152145  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.152153  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.152158  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.155700  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.351978  106017 request.go:632] Waited for 195.381489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:09.352057  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:09.352066  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.352075  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.352081  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.355842  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.356358  106017 pod_ready.go:93] pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:09.356379  106017 pod_ready.go:82] duration metric: took 399.447689ms for pod "kube-apiserver-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.356389  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.551411  106017 request.go:632] Waited for 194.933011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:02:09.551471  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823
	I1212 00:02:09.551476  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.551485  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.551489  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.554894  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.751755  106017 request.go:632] Waited for 196.244381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:09.751835  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:09.751841  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.751848  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.751854  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.754952  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:09.755722  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:09.755745  106017 pod_ready.go:82] duration metric: took 399.345607ms for pod "kube-controller-manager-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.755761  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:09.951966  106017 request.go:632] Waited for 196.120958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:02:09.952068  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m02
	I1212 00:02:09.952080  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:09.952092  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:09.952104  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:09.955804  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.152052  106017 request.go:632] Waited for 195.597395ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:10.152141  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:10.152152  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.152161  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.152166  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.155038  106017 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1212 00:02:10.155549  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.155569  106017 pod_ready.go:82] duration metric: took 399.796008ms for pod "kube-controller-manager-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.155583  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.351722  106017 request.go:632] Waited for 196.013906ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m03
	I1212 00:02:10.351803  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565823-m03
	I1212 00:02:10.351811  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.351826  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.351837  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.355190  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.551684  106017 request.go:632] Waited for 195.377569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:10.551808  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:10.551816  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.551824  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.551829  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.555651  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.556178  106017 pod_ready.go:93] pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.556199  106017 pod_ready.go:82] duration metric: took 400.605936ms for pod "kube-controller-manager-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.556213  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.751531  106017 request.go:632] Waited for 195.242482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:02:10.751632  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr5qc
	I1212 00:02:10.751654  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.751669  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.751679  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.755253  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.951536  106017 request.go:632] Waited for 195.352907ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:10.951607  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:10.951622  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:10.951633  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:10.951641  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:10.954707  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:10.955175  106017 pod_ready.go:93] pod "kube-proxy-hr5qc" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:10.955193  106017 pod_ready.go:82] duration metric: took 398.973413ms for pod "kube-proxy-hr5qc" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:10.955204  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-klpqs" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.151212  106017 request.go:632] Waited for 195.914198ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-klpqs
	I1212 00:02:11.151269  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-klpqs
	I1212 00:02:11.151274  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.151282  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.151285  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.154675  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.351669  106017 request.go:632] Waited for 196.350446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:11.351765  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:11.351776  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.351788  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.351796  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.354976  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.355603  106017 pod_ready.go:93] pod "kube-proxy-klpqs" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:11.355620  106017 pod_ready.go:82] duration metric: took 400.410567ms for pod "kube-proxy-klpqs" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.355631  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.551803  106017 request.go:632] Waited for 196.076188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:02:11.551880  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2lsd
	I1212 00:02:11.551892  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.551903  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.551915  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.555786  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.751843  106017 request.go:632] Waited for 195.375551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:11.751907  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:11.751912  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.751919  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.751924  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.755210  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:11.755911  106017 pod_ready.go:93] pod "kube-proxy-p2lsd" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:11.755936  106017 pod_ready.go:82] duration metric: took 400.297319ms for pod "kube-proxy-p2lsd" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.755951  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:11.951789  106017 request.go:632] Waited for 195.74885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:02:11.951866  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823
	I1212 00:02:11.951874  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:11.951891  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:11.951904  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:11.955633  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.152006  106017 request.go:632] Waited for 195.692099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:12.152097  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823
	I1212 00:02:12.152112  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.152120  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.152125  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.155247  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.155984  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.156005  106017 pod_ready.go:82] duration metric: took 400.045384ms for pod "kube-scheduler-ha-565823" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.156015  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.352045  106017 request.go:632] Waited for 195.938605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:02:12.352121  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m02
	I1212 00:02:12.352126  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.352134  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.352143  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.355894  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.551904  106017 request.go:632] Waited for 195.351995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:12.551970  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m02
	I1212 00:02:12.551977  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.551988  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.551993  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.555652  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.556289  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.556309  106017 pod_ready.go:82] duration metric: took 400.287227ms for pod "kube-scheduler-ha-565823-m02" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.556319  106017 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.751148  106017 request.go:632] Waited for 194.747976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m03
	I1212 00:02:12.751223  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565823-m03
	I1212 00:02:12.751231  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.751244  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.751260  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.754576  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.951572  106017 request.go:632] Waited for 196.386091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:12.951672  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes/ha-565823-m03
	I1212 00:02:12.951678  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.951689  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.951693  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.954814  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:12.955311  106017 pod_ready.go:93] pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace has status "Ready":"True"
	I1212 00:02:12.955329  106017 pod_ready.go:82] duration metric: took 398.995551ms for pod "kube-scheduler-ha-565823-m03" in "kube-system" namespace to be "Ready" ...
	I1212 00:02:12.955348  106017 pod_ready.go:39] duration metric: took 5.200033872s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:02:12.955369  106017 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:02:12.955437  106017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:02:12.971324  106017 api_server.go:72] duration metric: took 25.051879033s to wait for apiserver process to appear ...
	I1212 00:02:12.971354  106017 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:02:12.971379  106017 api_server.go:253] Checking apiserver healthz at https://192.168.39.19:8443/healthz ...
	I1212 00:02:12.977750  106017 api_server.go:279] https://192.168.39.19:8443/healthz returned 200:
	ok
	I1212 00:02:12.977832  106017 round_trippers.go:463] GET https://192.168.39.19:8443/version
	I1212 00:02:12.977843  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:12.977856  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:12.977863  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:12.978833  106017 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1212 00:02:12.978904  106017 api_server.go:141] control plane version: v1.31.2
	I1212 00:02:12.978918  106017 api_server.go:131] duration metric: took 7.558877ms to wait for apiserver health ...
	I1212 00:02:12.978926  106017 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:02:13.151199  106017 request.go:632] Waited for 172.198927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.151292  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.151303  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.151316  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.151325  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.157197  106017 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1212 00:02:13.164153  106017 system_pods.go:59] 24 kube-system pods found
	I1212 00:02:13.164182  106017 system_pods.go:61] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:02:13.164187  106017 system_pods.go:61] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:02:13.164191  106017 system_pods.go:61] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:02:13.164194  106017 system_pods.go:61] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:02:13.164197  106017 system_pods.go:61] "etcd-ha-565823-m03" [506e75d1-9e81-4c24-bf45-26f7fde169fa] Running
	I1212 00:02:13.164200  106017 system_pods.go:61] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:02:13.164203  106017 system_pods.go:61] "kindnet-jffrr" [d455764c-714e-4a39-9d11-1fc4ab3ae0c9] Running
	I1212 00:02:13.164206  106017 system_pods.go:61] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:02:13.164209  106017 system_pods.go:61] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:02:13.164211  106017 system_pods.go:61] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:02:13.164214  106017 system_pods.go:61] "kube-apiserver-ha-565823-m03" [636f5858-1c42-480d-9810-abf8aa16aa69] Running
	I1212 00:02:13.164218  106017 system_pods.go:61] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:02:13.164221  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:02:13.164224  106017 system_pods.go:61] "kube-controller-manager-ha-565823-m03" [47632e43-a401-4553-9bba-e8296023a6a2] Running
	I1212 00:02:13.164227  106017 system_pods.go:61] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:02:13.164230  106017 system_pods.go:61] "kube-proxy-klpqs" [42725ff5-dd5d-455f-a29a-9ce6c4b8f810] Running
	I1212 00:02:13.164233  106017 system_pods.go:61] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:02:13.164236  106017 system_pods.go:61] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:02:13.164240  106017 system_pods.go:61] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:02:13.164243  106017 system_pods.go:61] "kube-scheduler-ha-565823-m03" [467b67ab-33b8-4e90-b3d7-73f233c0a9e2] Running
	I1212 00:02:13.164246  106017 system_pods.go:61] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:02:13.164249  106017 system_pods.go:61] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:02:13.164251  106017 system_pods.go:61] "kube-vip-ha-565823-m03" [768639dc-dd70-4124-99c0-4e4d9b9bb9b5] Running
	I1212 00:02:13.164254  106017 system_pods.go:61] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:02:13.164259  106017 system_pods.go:74] duration metric: took 185.327636ms to wait for pod list to return data ...
	I1212 00:02:13.164271  106017 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:02:13.351702  106017 request.go:632] Waited for 187.33366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:02:13.351785  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/default/serviceaccounts
	I1212 00:02:13.351793  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.351804  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.351814  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.355589  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:13.355716  106017 default_sa.go:45] found service account: "default"
	I1212 00:02:13.355732  106017 default_sa.go:55] duration metric: took 191.453257ms for default service account to be created ...
	I1212 00:02:13.355741  106017 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:02:13.552179  106017 request.go:632] Waited for 196.355674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.552246  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/namespaces/kube-system/pods
	I1212 00:02:13.552253  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.552265  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.552274  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.558546  106017 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1212 00:02:13.567311  106017 system_pods.go:86] 24 kube-system pods found
	I1212 00:02:13.567335  106017 system_pods.go:89] "coredns-7c65d6cfc9-4q46c" [0b135b50-44c6-455c-85c0-d72033038d11] Running
	I1212 00:02:13.567341  106017 system_pods.go:89] "coredns-7c65d6cfc9-mqzbv" [0103eb36-35d9-48da-9244-89cc2ea25ec4] Running
	I1212 00:02:13.567345  106017 system_pods.go:89] "etcd-ha-565823" [1f96a46f-dc8e-4251-ac9a-8559b8cc62c1] Running
	I1212 00:02:13.567349  106017 system_pods.go:89] "etcd-ha-565823-m02" [88741dde-fdfa-4d57-b65e-72d0fbdca2f0] Running
	I1212 00:02:13.567352  106017 system_pods.go:89] "etcd-ha-565823-m03" [506e75d1-9e81-4c24-bf45-26f7fde169fa] Running
	I1212 00:02:13.567355  106017 system_pods.go:89] "kindnet-hz9rk" [1198ce2d-aac5-4e9f-9605-22e06dc18348] Running
	I1212 00:02:13.567359  106017 system_pods.go:89] "kindnet-jffrr" [d455764c-714e-4a39-9d11-1fc4ab3ae0c9] Running
	I1212 00:02:13.567362  106017 system_pods.go:89] "kindnet-kr5js" [752782e7-ecce-4a3a-95dd-ff734ded5684] Running
	I1212 00:02:13.567366  106017 system_pods.go:89] "kube-apiserver-ha-565823" [e68bf1a4-affd-4e12-9bbd-c0f4272e4076] Running
	I1212 00:02:13.567369  106017 system_pods.go:89] "kube-apiserver-ha-565823-m02" [a9e529c5-c273-4673-b7ab-0f50d09a6ff1] Running
	I1212 00:02:13.567373  106017 system_pods.go:89] "kube-apiserver-ha-565823-m03" [636f5858-1c42-480d-9810-abf8aa16aa69] Running
	I1212 00:02:13.567377  106017 system_pods.go:89] "kube-controller-manager-ha-565823" [ccf0bb46-7d19-44c0-b701-5d82443baec7] Running
	I1212 00:02:13.567380  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m02" [0bfe4a85-ff70-4cd5-b9d6-bf175dfbe525] Running
	I1212 00:02:13.567384  106017 system_pods.go:89] "kube-controller-manager-ha-565823-m03" [47632e43-a401-4553-9bba-e8296023a6a2] Running
	I1212 00:02:13.567387  106017 system_pods.go:89] "kube-proxy-hr5qc" [88445d08-4d68-4ca2-b91a-125924b109da] Running
	I1212 00:02:13.567390  106017 system_pods.go:89] "kube-proxy-klpqs" [42725ff5-dd5d-455f-a29a-9ce6c4b8f810] Running
	I1212 00:02:13.567393  106017 system_pods.go:89] "kube-proxy-p2lsd" [1682cacb-a489-4f19-a32e-5618bc038aa6] Running
	I1212 00:02:13.567396  106017 system_pods.go:89] "kube-scheduler-ha-565823" [a8e11855-0b26-481b-b1d8-dfdd3e2f51bb] Running
	I1212 00:02:13.567400  106017 system_pods.go:89] "kube-scheduler-ha-565823-m02" [6496a7db-b4d3-4619-b359-129fd9628c18] Running
	I1212 00:02:13.567404  106017 system_pods.go:89] "kube-scheduler-ha-565823-m03" [467b67ab-33b8-4e90-b3d7-73f233c0a9e2] Running
	I1212 00:02:13.567406  106017 system_pods.go:89] "kube-vip-ha-565823" [25c710fa-1329-4008-9e3b-1f72904a6310] Running
	I1212 00:02:13.567411  106017 system_pods.go:89] "kube-vip-ha-565823-m02" [890bfead-5786-4501-8933-1800ce2d94fc] Running
	I1212 00:02:13.567416  106017 system_pods.go:89] "kube-vip-ha-565823-m03" [768639dc-dd70-4124-99c0-4e4d9b9bb9b5] Running
	I1212 00:02:13.567419  106017 system_pods.go:89] "storage-provisioner" [b87f311a-6a5e-42bd-8091-6b771551e24c] Running
	I1212 00:02:13.567425  106017 system_pods.go:126] duration metric: took 211.677185ms to wait for k8s-apps to be running ...
	I1212 00:02:13.567435  106017 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:02:13.567479  106017 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:02:13.584100  106017 system_svc.go:56] duration metric: took 16.645631ms WaitForService to wait for kubelet
	I1212 00:02:13.584137  106017 kubeadm.go:582] duration metric: took 25.664696546s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:02:13.584164  106017 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:02:13.751620  106017 request.go:632] Waited for 167.335283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.19:8443/api/v1/nodes
	I1212 00:02:13.751682  106017 round_trippers.go:463] GET https://192.168.39.19:8443/api/v1/nodes
	I1212 00:02:13.751687  106017 round_trippers.go:469] Request Headers:
	I1212 00:02:13.751694  106017 round_trippers.go:473]     Accept: application/json, */*
	I1212 00:02:13.751707  106017 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1212 00:02:13.755649  106017 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1212 00:02:13.756501  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756522  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756532  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756535  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756538  106017 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:02:13.756541  106017 node_conditions.go:123] node cpu capacity is 2
	I1212 00:02:13.756545  106017 node_conditions.go:105] duration metric: took 172.375714ms to run NodePressure ...
	I1212 00:02:13.756565  106017 start.go:241] waiting for startup goroutines ...
	I1212 00:02:13.756588  106017 start.go:255] writing updated cluster config ...
	I1212 00:02:13.756868  106017 ssh_runner.go:195] Run: rm -f paused
	I1212 00:02:13.808453  106017 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 00:02:13.810275  106017 out.go:177] * Done! kubectl is now configured to use "ha-565823" cluster and "default" namespace by default
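
The tail of the start log above walks through minikube's readiness checks: it waits for the kube-system pods and the kubelet service, lists the nodes once more to verify capacity (the NodePressure step), and finally reports the kubectl/cluster version skew. The following is a minimal client-go sketch (not minikube's own code) of that same node query; the kubeconfig location is an assumption for illustration, not a value taken from this run.

// nodecheck.go - sketch: list nodes, print CPU and ephemeral-storage capacity,
// and fetch the server version so a kubectl/cluster minor skew can be spotted.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig already pointing at the cluster, e.g. the
	// ~/.kube/config that minikube writes ("ha-565823" context in this log).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same API call as the GET /api/v1/nodes above.
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}

	// Server version, comparable against the local kubectl version (minor skew check).
	sv, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("server: %s.%s\n", sv.Major, sv.Minor)
}

Run against this cluster it would print three lines with cpu=2 and ephemeral-storage=17734596Ki, matching the node_conditions entries logged above.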
	
	
	==> CRI-O <==
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.761883468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961975761857769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3e876b1-0429-49c5-a09b-d7d06648a861 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.762501359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f3da1fd4-3f5d-4375-924f-8ca46ee15cb8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.762582126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f3da1fd4-3f5d-4375-924f-8ca46ee15cb8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.763610944Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f3da1fd4-3f5d-4375-924f-8ca46ee15cb8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.809931266Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=902406a6-5954-4de7-8907-d240e5165a5e name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.810024650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=902406a6-5954-4de7-8907-d240e5165a5e name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.810917300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=470b2870-540e-4076-8244-f9720b7b53c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.811427068Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961975811403450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=470b2870-540e-4076-8244-f9720b7b53c4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.811974699Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d08b312a-8721-4cf5-bb0e-8857317c2a0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.812045713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d08b312a-8721-4cf5-bb0e-8857317c2a0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.812343545Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d08b312a-8721-4cf5-bb0e-8857317c2a0b name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.854814986Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26c23b8d-0a35-41ce-aafa-e20957cac0f8 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.854892194Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26c23b8d-0a35-41ce-aafa-e20957cac0f8 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.856779118Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37d467ba-17aa-46c5-99d4-6f98c9deb50b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.857499445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961975857469580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37d467ba-17aa-46c5-99d4-6f98c9deb50b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.858112181Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f9f55fd-9759-46fb-94e3-25f9972d51ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.858168554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f9f55fd-9759-46fb-94e3-25f9972d51ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.858383734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f9f55fd-9759-46fb-94e3-25f9972d51ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.901690916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b168bce-288d-4156-a532-3f20ce289d1d name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.901805622Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b168bce-288d-4156-a532-3f20ce289d1d name=/runtime.v1.RuntimeService/Version
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.903186027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15b43115-1873-4445-9fd4-ce127b685b05 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.903689252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961975903663510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15b43115-1873-4445-9fd4-ce127b685b05 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.904369525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=10492270-f6db-4387-b147-befa353350e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.904423685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=10492270-f6db-4387-b147-befa353350e0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:06:15 ha-565823 crio[664]: time="2024-12-12 00:06:15.904661975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0043af06cb9226ebe47b67f9d15a30d87546d6f49538bbb415d3474988fbc56,PodSandboxId:0d77818a442cebd88747ca7b32cc66cf4b7a762a2477c83570112898293db3eb,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733961741043581575,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-x4p94,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c1cc1db-013c-4f02-bc24-0e633c565129,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481,PodSandboxId:ab4dd7022ef59a6c1c03400a0acc77ae43c6989f613e95d8c2d40b985dc2fadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598062406556,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mqzbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0103eb36-35d9-48da-9244-89cc2ea25ec4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3,PodSandboxId:2787b4f317bfab4b0fad8f2f9e12dbca81f27b034687933a718437d26d8cd412,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733961598002322638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4q46c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
0b135b50-44c6-455c-85c0-d72033038d11,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba4c8c97ea09031da0b679b2d4068b11a5dbf621e1e15f9788241d12ef8218e3,PodSandboxId:4161eb9de6ddb004ebad01cea7ce55f4f50a49a78809249c73346dbdc6744843,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1733961597923499007,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b87f311a-6a5e-42bd-8091-6b771551e24c,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098,PodSandboxId:332b05e74370f4b115ead0a36f3a55b81aced85616dfd3616458f6a200d7a3f8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e,State:CO
NTAINER_RUNNING,CreatedAt:1733961585927004330,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hz9rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1198ce2d-aac5-4e9f-9605-22e06dc18348,},Annotations:map[string]string{io.kubernetes.container.hash: 9533276,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57,PodSandboxId:920e405616cded9c7837ae4402dae34a01840c97199be9309231565344266483,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733961581
168643463,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hr5qc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88445d08-4d68-4ca2-b91a-125924b109da,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778,PodSandboxId:87c6df22f89765e0f3017c2719348bd647b876a8c1604f3f1ba5be67b85021b5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173396157355
7802551,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaa7a8577c4c0d2b65a93222694855a4,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1,PodSandboxId:0ab557e831fb3eddef52b36ec11c6347f939b5dbe4814866869f056bc56ff11a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733961569170690271,Labels:map[strin
g]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fddadc76c4b2da11fc48dabaf0f7ded,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b,PodSandboxId:e6c331c3b34391d3ab5a93fd30799fa9b0305433ed3f5b33b3b99889997900f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733961569122150072,Labels:map[string]string{io.k
ubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a76b767f5584521bc3a8a4e6679c0b2e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95,PodSandboxId:d851e6de61a681ef6ef030a85d51814d4c786d0aecf12ec1647964b01a18a8b3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733961569095720588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,
io.kubernetes.pod.name: kube-apiserver-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fae40d20051ab63ee6c84f456649100b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4,PodSandboxId:a6c5b082d192453941a1a181b0e9f9a1f7d2cce65bcdfcce237cda7a9bc51e50,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733961569081937973,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.nam
e: kube-scheduler-ha-565823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a41f6f20361d8a099d64b4adbb7842d4,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=10492270-f6db-4387-b147-befa353350e0 name=/runtime.v1.RuntimeService/ListContainers
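
The CRI-O debug log above is dominated by the kubelet polling the CRI endpoints over gRPC: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers with an empty filter, each returning the full container list that the next section summarizes. Below is a rough sketch of the same ListContainers call made directly against the CRI socket; the socket path and the use of the v1 CRI API are assumptions based on the cri-o 1.29.1 version reported in the log, not something taken from the test itself.

// crilist.go - sketch: issue the same unfiltered ListContainers request the
// kubelet sends in the log and print a short id/name/state line per container.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O's default socket path; a minikube node may differ.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// No filter, so CRI-O logs "No filters were applied, returning full container list".
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Truncate the 64-char id purely for display.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

This is essentially what `crictl ps` does under the hood, which is why the container status table that follows lists the same ids and pod names as the ListContainersResponse dumps above.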
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f0043af06cb92       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   0d77818a442ce       busybox-7dff88458-x4p94
	999ac64245591       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   ab4dd7022ef59       coredns-7c65d6cfc9-mqzbv
	0beb663c1a28f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   2787b4f317bfa       coredns-7c65d6cfc9-4q46c
	ba4c8c97ea090       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   4161eb9de6ddb       storage-provisioner
	bfdacc6be0aee       docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3    6 minutes ago       Running             kindnet-cni               0                   332b05e74370f       kindnet-hz9rk
	514637eeaa812       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   920e405616cde       kube-proxy-hr5qc
	768be9c254101       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   87c6df22f8976       kube-vip-ha-565823
	452c6d19b2de9       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   0ab557e831fb3       kube-controller-manager-ha-565823
	743ae8ccc81f5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   e6c331c3b3439       etcd-ha-565823
	4f25ff314c2e8       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   d851e6de61a68       kube-apiserver-ha-565823
	b28e7b492cfe7       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a6c5b082d1924       kube-scheduler-ha-565823
	
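The container listing above is the kind of output crictl produces inside the minikube node. A minimal Go sketch that would collect a similar listing is shown below; it assumes the profile is named ha-565823 (as the node names in this report suggest) and that the minikube binary is on PATH, so treat the profile name and flags as illustrative rather than part of the test harness.

// Collect a container listing from inside the (assumed) ha-565823 minikube node,
// mirroring the "container status" section above. Sketch only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `minikube ssh --` runs the remaining arguments inside the node;
	// `sudo crictl ps -a` lists running and exited containers with image, state and pod.
	out, err := exec.Command("minikube", "-p", "ha-565823", "ssh", "--",
		"sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl listing failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
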
	
	==> coredns [0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3] <==
	[INFO] 10.244.1.2:40894 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004450385s
	[INFO] 10.244.1.2:47929 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225565s
	[INFO] 10.244.1.2:51252 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126773s
	[INFO] 10.244.1.2:47545 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126535s
	[INFO] 10.244.1.2:37654 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119814s
	[INFO] 10.244.2.2:44808 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015021s
	[INFO] 10.244.2.2:48775 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001815223s
	[INFO] 10.244.2.2:56148 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132782s
	[INFO] 10.244.2.2:57998 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133493s
	[INFO] 10.244.0.4:39053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087907s
	[INFO] 10.244.0.4:34059 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001091775s
	[INFO] 10.244.1.2:56415 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000835348s
	[INFO] 10.244.1.2:46751 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114147s
	[INFO] 10.244.1.2:35096 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100606s
	[INFO] 10.244.2.2:40358 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136169s
	[INFO] 10.244.2.2:56318 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000204673s
	[INFO] 10.244.0.4:34528 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012651s
	[INFO] 10.244.1.2:56678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145563s
	[INFO] 10.244.1.2:43671 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000363816s
	[INFO] 10.244.1.2:48047 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136942s
	[INFO] 10.244.1.2:35425 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00019653s
	[INFO] 10.244.2.2:59862 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112519s
	[INFO] 10.244.0.4:33935 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108695s
	[INFO] 10.244.0.4:51044 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115709s
	[INFO] 10.244.0.4:40489 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092799s
	
	
	==> coredns [999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481] <==
	[INFO] 10.244.0.4:33301 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137834s
	[INFO] 10.244.0.4:55709 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001541208s
	[INFO] 10.244.0.4:59133 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001387137s
	[INFO] 10.244.1.2:35268 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004904013s
	[INFO] 10.244.1.2:45390 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000166839s
	[INFO] 10.244.2.2:51385 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000248421s
	[INFO] 10.244.2.2:33701 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001310625s
	[INFO] 10.244.2.2:48335 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000124081s
	[INFO] 10.244.2.2:58439 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000278252s
	[INFO] 10.244.0.4:51825 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131036s
	[INFO] 10.244.0.4:54179 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001798071s
	[INFO] 10.244.0.4:38851 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094604s
	[INFO] 10.244.0.4:48660 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000050194s
	[INFO] 10.244.0.4:57598 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082654s
	[INFO] 10.244.0.4:43576 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000100662s
	[INFO] 10.244.1.2:60988 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015105s
	[INFO] 10.244.2.2:60481 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130341s
	[INFO] 10.244.2.2:48427 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079579s
	[INFO] 10.244.0.4:39760 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227961s
	[INFO] 10.244.0.4:48093 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090061s
	[INFO] 10.244.0.4:37075 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076033s
	[INFO] 10.244.2.2:55165 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000258305s
	[INFO] 10.244.2.2:40866 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000177114s
	[INFO] 10.244.2.2:58880 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137362s
	[INFO] 10.244.0.4:60821 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000179152s
	
	
	==> describe nodes <==
	Name:               ha-565823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_11T23_59_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 11 Dec 2024 23:59:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:06:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:38 +0000   Wed, 11 Dec 2024 23:59:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.19
	  Hostname:    ha-565823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 344476ebea784ce5952c6b9d7486bfc2
	  System UUID:                344476eb-ea78-4ce5-952c-6b9d7486bfc2
	  Boot ID:                    cf8379f5-6946-439d-a3d4-fa7d39c2dea7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x4p94              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-7c65d6cfc9-4q46c             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m36s
	  kube-system                 coredns-7c65d6cfc9-mqzbv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m36s
	  kube-system                 etcd-ha-565823                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m41s
	  kube-system                 kindnet-hz9rk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-apiserver-ha-565823             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-controller-manager-ha-565823    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-proxy-hr5qc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-scheduler-ha-565823             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 kube-vip-ha-565823                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m34s  kube-proxy       
	  Normal  Starting                 6m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m41s  kubelet          Node ha-565823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s  kubelet          Node ha-565823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s  kubelet          Node ha-565823 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m37s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	  Normal  NodeReady                6m19s  kubelet          Node ha-565823 status is now: NodeReady
	  Normal  RegisteredNode           5m39s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	  Normal  RegisteredNode           4m23s  node-controller  Node ha-565823 event: Registered Node ha-565823 in Controller
	
	
	Name:               ha-565823-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_00_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:00:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:03:33 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 12 Dec 2024 00:02:33 +0000   Thu, 12 Dec 2024 00:04:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-565823-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9273c598fccb4678bf93616ea428fab5
	  System UUID:                9273c598-fccb-4678-bf93-616ea428fab5
	  Boot ID:                    73eb7add-f6da-422d-ad45-9773172878c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nsw2n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-ha-565823-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m45s
	  kube-system                 kindnet-kr5js                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m47s
	  kube-system                 kube-apiserver-ha-565823-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-controller-manager-ha-565823-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-p2lsd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m47s
	  kube-system                 kube-scheduler-ha-565823-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-vip-ha-565823-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m43s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m47s (x8 over 5m47s)  kubelet          Node ha-565823-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m47s (x8 over 5m47s)  kubelet          Node ha-565823-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m47s (x7 over 5m47s)  kubelet          Node ha-565823-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m42s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  RegisteredNode           5m39s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-565823-m02 event: Registered Node ha-565823-m02 in Controller
	  Normal  NodeNotReady             2m2s                   node-controller  Node ha-565823-m02 status is now: NodeNotReady
	
	
	Name:               ha-565823-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_01_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:01:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:06:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:02:46 +0000   Thu, 12 Dec 2024 00:02:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    ha-565823-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7cdc3cdb36e495abaa3ddda542ce8f6
	  System UUID:                a7cdc3cd-b36e-495a-baa3-ddda542ce8f6
	  Boot ID:                    e8069ced-7862-4741-8f56-298b003d0b4d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s8nmx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 etcd-ha-565823-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m30s
	  kube-system                 kindnet-jffrr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m32s
	  kube-system                 kube-apiserver-ha-565823-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-ha-565823-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-klpqs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-scheduler-ha-565823-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-vip-ha-565823-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m32s (x8 over 4m32s)  kubelet          Node ha-565823-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s (x8 over 4m32s)  kubelet          Node ha-565823-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s (x7 over 4m32s)  kubelet          Node ha-565823-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m29s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-565823-m03 event: Registered Node ha-565823-m03 in Controller
	
	
	Name:               ha-565823-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565823-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=ha-565823
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_12T00_02_54_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:02:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565823-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:06:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:02:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:03:25 +0000   Thu, 12 Dec 2024 00:03:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.247
	  Hostname:    ha-565823-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9da6268e700e4cc18f576f10f66d598f
	  System UUID:                9da6268e-700e-4cc1-8f57-6f10f66d598f
	  Boot ID:                    20440ea1-d260-49fc-a678-9a23de1ac4f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-6qk4d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m22s
	  kube-system                 kube-proxy-j59sb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  RegisteredNode           3m22s                  node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m22s (x2 over 3m22s)  kubelet          Node ha-565823-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m22s (x2 over 3m22s)  kubelet          Node ha-565823-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m22s (x2 over 3m22s)  kubelet          Node ha-565823-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-565823-m04 event: Registered Node ha-565823-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-565823-m04 status is now: NodeReady
	
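The four node descriptions above come from the Kubernetes API and can be re-collected from a live cluster. A minimal sketch follows; it assumes kubectl is on PATH and that the kubeconfig context carries the profile name ha-565823 (minikube's usual convention), so the context name is an assumption, not something taken from this report.

// Re-collect the "describe nodes" section above from a live cluster. Sketch only;
// the context name ha-565823 is assumed from the profile naming convention.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `kubectl describe nodes` prints labels, conditions, capacity, non-terminated pods
	// and events for every node, matching the layout of the section above.
	out, err := exec.Command("kubectl", "--context", "ha-565823",
		"describe", "nodes").CombinedOutput()
	if err != nil {
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
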
	
	==> dmesg <==
	[Dec11 23:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053078] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041942] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.920910] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Dec11 23:59] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.625477] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.503596] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.061991] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056761] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.187047] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.124910] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.280035] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.149659] systemd-fstab-generator[748]: Ignoring "noauto" option for root device
	[  +4.048783] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.069316] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.737553] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.583447] systemd-fstab-generator[1297]: Ignoring "noauto" option for root device
	[  +5.823487] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.790300] kauditd_printk_skb: 34 callbacks suppressed
	[Dec12 00:00] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b] <==
	{"level":"warn","ts":"2024-12-12T00:06:16.168664Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.176456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.179840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.192023Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.198795Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.206027Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.103:2380/version","remote-member-id":"6696b50e49e4750c","error":"Get \"https://192.168.39.103:2380/version\": dial tcp 192.168.39.103:2380: i/o timeout"}
	{"level":"warn","ts":"2024-12-12T00:06:16.206154Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"6696b50e49e4750c","error":"Get \"https://192.168.39.103:2380/version\": dial tcp 192.168.39.103:2380: i/o timeout"}
	{"level":"warn","ts":"2024-12-12T00:06:16.206119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.211412Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.217574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.225377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.231021Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.237643Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.241001Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.244816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.252424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.261170Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.261368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.270950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.274411Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.276903Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.280221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.285695Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.291370Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-12T00:06:16.356165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"683e1d26ac7e3123","from":"683e1d26ac7e3123","remote-peer-id":"6696b50e49e4750c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:06:16 up 7 min,  0 users,  load average: 0.05, 0.16, 0.09
	Linux ha-565823 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098] <==
	I1212 00:05:37.120430       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:47.119691       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:47.119737       1 main.go:301] handling current node
	I1212 00:05:47.119753       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:47.119758       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:47.119987       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:47.119994       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:47.120217       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:47.120242       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	I1212 00:05:57.128438       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:05:57.128810       1 main.go:301] handling current node
	I1212 00:05:57.128927       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:05:57.128989       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:05:57.129767       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:05:57.129834       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:05:57.130023       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:05:57.130046       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	I1212 00:06:07.120193       1 main.go:297] Handling node with IPs: map[192.168.39.19:{}]
	I1212 00:06:07.120286       1 main.go:301] handling current node
	I1212 00:06:07.120313       1 main.go:297] Handling node with IPs: map[192.168.39.103:{}]
	I1212 00:06:07.120331       1 main.go:324] Node ha-565823-m02 has CIDR [10.244.1.0/24] 
	I1212 00:06:07.120614       1 main.go:297] Handling node with IPs: map[192.168.39.95:{}]
	I1212 00:06:07.120667       1 main.go:324] Node ha-565823-m03 has CIDR [10.244.2.0/24] 
	I1212 00:06:07.120856       1 main.go:297] Handling node with IPs: map[192.168.39.247:{}]
	I1212 00:06:07.120887       1 main.go:324] Node ha-565823-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95] <==
	I1211 23:59:33.823962       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1211 23:59:33.879965       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1211 23:59:33.896294       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.19]
	I1211 23:59:33.897349       1 controller.go:615] quota admission added evaluator for: endpoints
	I1211 23:59:33.902931       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1211 23:59:34.842734       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1211 23:59:35.374409       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1211 23:59:35.395837       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1211 23:59:35.560177       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1211 23:59:39.944410       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1211 23:59:40.344123       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1212 00:02:22.272920       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55802: use of closed network connection
	E1212 00:02:22.464756       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55828: use of closed network connection
	E1212 00:02:22.651355       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55850: use of closed network connection
	E1212 00:02:23.038043       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55874: use of closed network connection
	E1212 00:02:23.226745       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55900: use of closed network connection
	E1212 00:02:23.410000       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55904: use of closed network connection
	E1212 00:02:23.591256       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55924: use of closed network connection
	E1212 00:02:23.770667       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55932: use of closed network connection
	E1212 00:02:24.076679       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55962: use of closed network connection
	E1212 00:02:24.252739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55982: use of closed network connection
	E1212 00:02:24.461578       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56012: use of closed network connection
	E1212 00:02:24.646238       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56034: use of closed network connection
	E1212 00:02:24.817848       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56044: use of closed network connection
	E1212 00:02:24.999617       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:56060: use of closed network connection
	
	
	==> kube-controller-manager [452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1] <==
	I1212 00:02:54.484626       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565823-m04" podCIDRs=["10.244.3.0/24"]
	I1212 00:02:54.484689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.484721       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.500323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.636444       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565823-m04"
	I1212 00:02:54.652045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:54.687694       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:55.082775       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:57.485970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:57.555718       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:58.675906       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:02:58.734910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:04.836593       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:16.466024       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:16.466304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565823-m04"
	I1212 00:03:16.485293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:17.501671       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:03:25.341676       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m04"
	I1212 00:04:14.668472       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:14.669356       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565823-m04"
	I1212 00:04:14.705380       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:14.785686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.151428ms"
	I1212 00:04:14.785837       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="78.406µs"
	I1212 00:04:18.764949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	I1212 00:04:19.939887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-565823-m02"
	
	
	==> kube-proxy [514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1211 23:59:41.687183       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1211 23:59:41.713699       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.19"]
	E1211 23:59:41.713883       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1211 23:59:41.760766       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1211 23:59:41.760924       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1211 23:59:41.761009       1 server_linux.go:169] "Using iptables Proxier"
	I1211 23:59:41.764268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1211 23:59:41.765555       1 server.go:483] "Version info" version="v1.31.2"
	I1211 23:59:41.765710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1211 23:59:41.768630       1 config.go:105] "Starting endpoint slice config controller"
	I1211 23:59:41.769016       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1211 23:59:41.769876       1 config.go:199] "Starting service config controller"
	I1211 23:59:41.769889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1211 23:59:41.771229       1 config.go:328] "Starting node config controller"
	I1211 23:59:41.771259       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1211 23:59:41.871443       1 shared_informer.go:320] Caches are synced for node config
	I1211 23:59:41.871633       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1211 23:59:41.871849       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4] <==
	E1211 23:59:33.413263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1211 23:59:35.297693       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1212 00:02:14.658309       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="bc1a3365-d32e-42cc-b58c-95a59e72d54b" pod="default/busybox-7dff88458-nsw2n" assumedNode="ha-565823-m02" currentNode="ha-565823-m03"
	E1212 00:02:14.675240       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nsw2n\": pod busybox-7dff88458-nsw2n is already assigned to node \"ha-565823-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-nsw2n" node="ha-565823-m03"
	E1212 00:02:14.679553       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod bc1a3365-d32e-42cc-b58c-95a59e72d54b(default/busybox-7dff88458-nsw2n) was assumed on ha-565823-m03 but assigned to ha-565823-m02" pod="default/busybox-7dff88458-nsw2n"
	E1212 00:02:14.680513       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-nsw2n\": pod busybox-7dff88458-nsw2n is already assigned to node \"ha-565823-m02\"" pod="default/busybox-7dff88458-nsw2n"
	I1212 00:02:14.680708       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-nsw2n" node="ha-565823-m02"
	E1212 00:02:14.899144       1 schedule_one.go:1078] "Error occurred" err="Pod default/busybox-7dff88458-vn6xg is already present in the active queue" pod="default/busybox-7dff88458-vn6xg"
	E1212 00:02:14.936687       1 schedule_one.go:1106] "Error updating pod" err="pods \"busybox-7dff88458-vn6xg\" not found" pod="default/busybox-7dff88458-vn6xg"
	E1212 00:02:54.574668       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j59sb\": pod kube-proxy-j59sb is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j59sb" node="ha-565823-m04"
	E1212 00:02:54.578200       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-6qk4d\": pod kindnet-6qk4d is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-6qk4d" node="ha-565823-m04"
	E1212 00:02:54.581395       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b52adb65-9292-42b8-bca8-b4a44c756e15(kube-system/kube-proxy-j59sb) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j59sb"
	E1212 00:02:54.582857       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j59sb\": pod kube-proxy-j59sb is already assigned to node \"ha-565823-m04\"" pod="kube-system/kube-proxy-j59sb"
	I1212 00:02:54.582977       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j59sb" node="ha-565823-m04"
	E1212 00:02:54.583674       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 8ba90dda-f093-4ba3-abad-427394ebe334(kube-system/kindnet-6qk4d) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-6qk4d"
	E1212 00:02:54.583943       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-6qk4d\": pod kindnet-6qk4d is already assigned to node \"ha-565823-m04\"" pod="kube-system/kindnet-6qk4d"
	I1212 00:02:54.584002       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6qk4d" node="ha-565823-m04"
	E1212 00:02:54.639291       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-lbbhs\": pod kube-proxy-lbbhs is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-lbbhs" node="ha-565823-m04"
	E1212 00:02:54.640439       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 2061489e-9108-4e76-af40-2fcc1540357b(kube-system/kube-proxy-lbbhs) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-lbbhs"
	E1212 00:02:54.640623       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-lbbhs\": pod kube-proxy-lbbhs is already assigned to node \"ha-565823-m04\"" pod="kube-system/kube-proxy-lbbhs"
	I1212 00:02:54.640743       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-lbbhs" node="ha-565823-m04"
	E1212 00:02:54.639802       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pfdgd\": pod kindnet-pfdgd is already assigned to node \"ha-565823-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pfdgd" node="ha-565823-m04"
	E1212 00:02:54.641599       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5bd86f21-f17e-4d19-8bac-53393aecda0b(kube-system/kindnet-pfdgd) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-pfdgd"
	E1212 00:02:54.641728       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pfdgd\": pod kindnet-pfdgd is already assigned to node \"ha-565823-m04\"" pod="kube-system/kindnet-pfdgd"
	I1212 00:02:54.641865       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pfdgd" node="ha-565823-m04"
	
	
	==> kubelet <==
	Dec 12 00:04:45 ha-565823 kubelet[1304]: E1212 00:04:45.646672    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961885646360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:45 ha-565823 kubelet[1304]: E1212 00:04:45.646986    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961885646360837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:55 ha-565823 kubelet[1304]: E1212 00:04:55.649177    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961895648846632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:04:55 ha-565823 kubelet[1304]: E1212 00:04:55.649229    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961895648846632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:05 ha-565823 kubelet[1304]: E1212 00:05:05.650905    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961905650620490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:05 ha-565823 kubelet[1304]: E1212 00:05:05.650951    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961905650620490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:15 ha-565823 kubelet[1304]: E1212 00:05:15.652272    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961915651820297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:15 ha-565823 kubelet[1304]: E1212 00:05:15.652343    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961915651820297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:25 ha-565823 kubelet[1304]: E1212 00:05:25.654671    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961925654167907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:25 ha-565823 kubelet[1304]: E1212 00:05:25.655016    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961925654167907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.529805    1304 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 00:05:35 ha-565823 kubelet[1304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 00:05:35 ha-565823 kubelet[1304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.657687    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961935657273568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:35 ha-565823 kubelet[1304]: E1212 00:05:35.657712    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961935657273568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:45 ha-565823 kubelet[1304]: E1212 00:05:45.659792    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961945659457766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:45 ha-565823 kubelet[1304]: E1212 00:05:45.659845    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961945659457766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:55 ha-565823 kubelet[1304]: E1212 00:05:55.661887    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961955661658114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:05:55 ha-565823 kubelet[1304]: E1212 00:05:55.662031    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961955661658114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:06:05 ha-565823 kubelet[1304]: E1212 00:06:05.663647    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961965663423234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:06:05 ha-565823 kubelet[1304]: E1212 00:06:05.663687    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961965663423234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:06:15 ha-565823 kubelet[1304]: E1212 00:06:15.666171    1304 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961975665523145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 00:06:15 ha-565823 kubelet[1304]: E1212 00:06:15.666205    1304 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733961975665523145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
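The kubelet entries in the dump above fail HasDedicatedImageFs because the CRI ImageFsInfoResponse reports only the image filesystem at /var/lib/containers/storage/overlay-images and no container-filesystem entry. A minimal Go sketch (not part of the test suite) for pulling the same CRI stats directly from the node; the binary path and the ha-565823 profile come from the log above, while the availability of crictl inside the node is an assumption based on the cri-o runtime reported there:

// imagefs.go: query the node's CRI image-filesystem stats (the data behind the
// ImageFsInfoResponse messages in the kubelet log) via crictl over "minikube ssh".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name taken from the report; crictl on the node is assumed.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-565823",
		"ssh", "--", "sudo", "crictl", "imagefsinfo").CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("ssh/crictl failed:", err)
	}
}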
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565823 -n ha-565823
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.18s)
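The helper lines above gather the post-mortem with "minikube -p ha-565823 logs -n 25" and a kubectl field-selector query for non-Running pods. A small Go sketch, assuming only the binary path and profile/context name shown in the log, that replays the same two collection commands outside the harness:

// postmortem.go: re-run the two post-mortem collection commands seen in
// helpers_test.go:247 and helpers_test.go:261; error handling is minimal.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
}

func main() {
	// Last 25 lines of minikube's aggregated logs.
	run("out/minikube-linux-amd64", "-p", "ha-565823", "logs", "-n", "25")
	// Names of any pods not in phase Running, across all namespaces.
	run("kubectl", "--context", "ha-565823", "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running")
}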

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-565823 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-565823 -v=7 --alsologtostderr
E1212 00:07:46.618203   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:07:55.697662   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-565823 -v=7 --alsologtostderr: exit status 82 (2m1.94938224s)

                                                
                                                
-- stdout --
	* Stopping node "ha-565823-m04"  ...
	* Stopping node "ha-565823-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:06:17.369679  111445 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:06:17.369800  111445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:17.369809  111445 out.go:358] Setting ErrFile to fd 2...
	I1212 00:06:17.369813  111445 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:06:17.370014  111445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:06:17.370255  111445 out.go:352] Setting JSON to false
	I1212 00:06:17.370345  111445 mustload.go:65] Loading cluster: ha-565823
	I1212 00:06:17.370739  111445 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:06:17.370824  111445 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:06:17.371004  111445 mustload.go:65] Loading cluster: ha-565823
	I1212 00:06:17.371136  111445 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:06:17.371163  111445 stop.go:39] StopHost: ha-565823-m04
	I1212 00:06:17.371539  111445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:06:17.371616  111445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:06:17.387363  111445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I1212 00:06:17.387846  111445 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:06:17.388414  111445 main.go:141] libmachine: Using API Version  1
	I1212 00:06:17.388436  111445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:06:17.388771  111445 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:06:17.391137  111445 out.go:177] * Stopping node "ha-565823-m04"  ...
	I1212 00:06:17.392476  111445 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1212 00:06:17.392520  111445 main.go:141] libmachine: (ha-565823-m04) Calling .DriverName
	I1212 00:06:17.392736  111445 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1212 00:06:17.392779  111445 main.go:141] libmachine: (ha-565823-m04) Calling .GetSSHHostname
	I1212 00:06:17.395634  111445 main.go:141] libmachine: (ha-565823-m04) DBG | domain ha-565823-m04 has defined MAC address 52:54:00:4e:34:a1 in network mk-ha-565823
	I1212 00:06:17.395983  111445 main.go:141] libmachine: (ha-565823-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:34:a1", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:02:40 +0000 UTC Type:0 Mac:52:54:00:4e:34:a1 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-565823-m04 Clientid:01:52:54:00:4e:34:a1}
	I1212 00:06:17.396011  111445 main.go:141] libmachine: (ha-565823-m04) DBG | domain ha-565823-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:4e:34:a1 in network mk-ha-565823
	I1212 00:06:17.396153  111445 main.go:141] libmachine: (ha-565823-m04) Calling .GetSSHPort
	I1212 00:06:17.396327  111445 main.go:141] libmachine: (ha-565823-m04) Calling .GetSSHKeyPath
	I1212 00:06:17.396462  111445 main.go:141] libmachine: (ha-565823-m04) Calling .GetSSHUsername
	I1212 00:06:17.396583  111445 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m04/id_rsa Username:docker}
	I1212 00:06:17.484989  111445 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1212 00:06:17.541316  111445 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1212 00:06:17.598886  111445 main.go:141] libmachine: Stopping "ha-565823-m04"...
	I1212 00:06:17.598919  111445 main.go:141] libmachine: (ha-565823-m04) Calling .GetState
	I1212 00:06:17.600512  111445 main.go:141] libmachine: (ha-565823-m04) Calling .Stop
	I1212 00:06:17.603989  111445 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 0/120
	I1212 00:06:18.849053  111445 main.go:141] libmachine: (ha-565823-m04) Calling .GetState
	I1212 00:06:18.850347  111445 main.go:141] libmachine: Machine "ha-565823-m04" was stopped.
	I1212 00:06:18.850372  111445 stop.go:75] duration metric: took 1.457891346s to stop
	I1212 00:06:18.850394  111445 stop.go:39] StopHost: ha-565823-m03
	I1212 00:06:18.850688  111445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:06:18.850749  111445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:06:18.865452  111445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I1212 00:06:18.865982  111445 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:06:18.866573  111445 main.go:141] libmachine: Using API Version  1
	I1212 00:06:18.866597  111445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:06:18.866899  111445 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:06:18.868884  111445 out.go:177] * Stopping node "ha-565823-m03"  ...
	I1212 00:06:18.870161  111445 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1212 00:06:18.870194  111445 main.go:141] libmachine: (ha-565823-m03) Calling .DriverName
	I1212 00:06:18.870415  111445 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1212 00:06:18.870443  111445 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHHostname
	I1212 00:06:18.873454  111445 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:06:18.873855  111445 main.go:141] libmachine: (ha-565823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:bd:55", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:01:12 +0000 UTC Type:0 Mac:52:54:00:03:bd:55 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:ha-565823-m03 Clientid:01:52:54:00:03:bd:55}
	I1212 00:06:18.873898  111445 main.go:141] libmachine: (ha-565823-m03) DBG | domain ha-565823-m03 has defined IP address 192.168.39.95 and MAC address 52:54:00:03:bd:55 in network mk-ha-565823
	I1212 00:06:18.874035  111445 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHPort
	I1212 00:06:18.874213  111445 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHKeyPath
	I1212 00:06:18.874395  111445 main.go:141] libmachine: (ha-565823-m03) Calling .GetSSHUsername
	I1212 00:06:18.874588  111445 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m03/id_rsa Username:docker}
	I1212 00:06:18.967204  111445 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1212 00:06:19.022449  111445 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1212 00:06:19.077648  111445 main.go:141] libmachine: Stopping "ha-565823-m03"...
	I1212 00:06:19.077681  111445 main.go:141] libmachine: (ha-565823-m03) Calling .GetState
	I1212 00:06:19.079391  111445 main.go:141] libmachine: (ha-565823-m03) Calling .Stop
	I1212 00:06:19.082715  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 0/120
	I1212 00:06:20.084630  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 1/120
	I1212 00:06:21.086183  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 2/120
	I1212 00:06:22.087932  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 3/120
	I1212 00:06:23.089450  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 4/120
	I1212 00:06:24.091750  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 5/120
	I1212 00:06:25.094205  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 6/120
	I1212 00:06:26.095684  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 7/120
	I1212 00:06:27.097158  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 8/120
	I1212 00:06:28.098537  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 9/120
	I1212 00:06:29.100632  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 10/120
	I1212 00:06:30.102147  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 11/120
	I1212 00:06:31.103347  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 12/120
	I1212 00:06:32.104855  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 13/120
	I1212 00:06:33.106362  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 14/120
	I1212 00:06:34.108263  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 15/120
	I1212 00:06:35.110451  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 16/120
	I1212 00:06:36.111847  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 17/120
	I1212 00:06:37.113733  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 18/120
	I1212 00:06:38.115063  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 19/120
	I1212 00:06:39.116941  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 20/120
	I1212 00:06:40.118683  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 21/120
	I1212 00:06:41.120304  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 22/120
	I1212 00:06:42.122552  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 23/120
	I1212 00:06:43.124097  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 24/120
	I1212 00:06:44.125955  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 25/120
	I1212 00:06:45.127561  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 26/120
	I1212 00:06:46.129021  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 27/120
	I1212 00:06:47.130444  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 28/120
	I1212 00:06:48.132378  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 29/120
	I1212 00:06:49.134237  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 30/120
	I1212 00:06:50.135777  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 31/120
	I1212 00:06:51.137262  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 32/120
	I1212 00:06:52.138731  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 33/120
	I1212 00:06:53.140235  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 34/120
	I1212 00:06:54.141850  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 35/120
	I1212 00:06:55.143165  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 36/120
	I1212 00:06:56.144394  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 37/120
	I1212 00:06:57.145642  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 38/120
	I1212 00:06:58.146958  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 39/120
	I1212 00:06:59.148420  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 40/120
	I1212 00:07:00.149687  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 41/120
	I1212 00:07:01.150929  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 42/120
	I1212 00:07:02.152235  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 43/120
	I1212 00:07:03.153505  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 44/120
	I1212 00:07:04.154909  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 45/120
	I1212 00:07:05.156345  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 46/120
	I1212 00:07:06.157688  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 47/120
	I1212 00:07:07.159035  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 48/120
	I1212 00:07:08.160280  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 49/120
	I1212 00:07:09.162066  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 50/120
	I1212 00:07:10.163296  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 51/120
	I1212 00:07:11.164619  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 52/120
	I1212 00:07:12.165859  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 53/120
	I1212 00:07:13.167144  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 54/120
	I1212 00:07:14.168823  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 55/120
	I1212 00:07:15.170228  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 56/120
	I1212 00:07:16.171563  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 57/120
	I1212 00:07:17.173007  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 58/120
	I1212 00:07:18.175186  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 59/120
	I1212 00:07:19.176935  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 60/120
	I1212 00:07:20.178380  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 61/120
	I1212 00:07:21.179683  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 62/120
	I1212 00:07:22.181145  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 63/120
	I1212 00:07:23.182541  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 64/120
	I1212 00:07:24.184371  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 65/120
	I1212 00:07:25.185970  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 66/120
	I1212 00:07:26.187339  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 67/120
	I1212 00:07:27.188637  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 68/120
	I1212 00:07:28.190044  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 69/120
	I1212 00:07:29.191622  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 70/120
	I1212 00:07:30.192935  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 71/120
	I1212 00:07:31.194430  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 72/120
	I1212 00:07:32.195712  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 73/120
	I1212 00:07:33.197388  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 74/120
	I1212 00:07:34.199145  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 75/120
	I1212 00:07:35.200760  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 76/120
	I1212 00:07:36.202293  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 77/120
	I1212 00:07:37.203754  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 78/120
	I1212 00:07:38.205248  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 79/120
	I1212 00:07:39.206754  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 80/120
	I1212 00:07:40.208145  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 81/120
	I1212 00:07:41.209494  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 82/120
	I1212 00:07:42.210729  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 83/120
	I1212 00:07:43.212094  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 84/120
	I1212 00:07:44.213874  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 85/120
	I1212 00:07:45.215048  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 86/120
	I1212 00:07:46.216305  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 87/120
	I1212 00:07:47.217638  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 88/120
	I1212 00:07:48.218912  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 89/120
	I1212 00:07:49.220615  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 90/120
	I1212 00:07:50.222588  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 91/120
	I1212 00:07:51.223924  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 92/120
	I1212 00:07:52.225221  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 93/120
	I1212 00:07:53.226519  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 94/120
	I1212 00:07:54.228584  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 95/120
	I1212 00:07:55.229947  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 96/120
	I1212 00:07:56.231741  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 97/120
	I1212 00:07:57.233107  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 98/120
	I1212 00:07:58.234400  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 99/120
	I1212 00:07:59.236155  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 100/120
	I1212 00:08:00.237907  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 101/120
	I1212 00:08:01.239153  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 102/120
	I1212 00:08:02.240415  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 103/120
	I1212 00:08:03.241725  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 104/120
	I1212 00:08:04.243453  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 105/120
	I1212 00:08:05.244677  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 106/120
	I1212 00:08:06.246177  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 107/120
	I1212 00:08:07.247423  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 108/120
	I1212 00:08:08.248689  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 109/120
	I1212 00:08:09.250430  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 110/120
	I1212 00:08:10.251817  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 111/120
	I1212 00:08:11.253037  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 112/120
	I1212 00:08:12.254399  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 113/120
	I1212 00:08:13.255614  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 114/120
	I1212 00:08:14.257194  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 115/120
	I1212 00:08:15.258756  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 116/120
	I1212 00:08:16.260099  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 117/120
	I1212 00:08:17.262115  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 118/120
	I1212 00:08:18.263438  111445 main.go:141] libmachine: (ha-565823-m03) Waiting for machine to stop 119/120
	I1212 00:08:19.264134  111445 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1212 00:08:19.264228  111445 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 00:08:19.266076  111445 out.go:201] 
	W1212 00:08:19.267430  111445 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 00:08:19.267443  111445 out.go:270] * 
	* 
	W1212 00:08:19.270397  111445 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:08:19.271651  111445 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-565823 -v=7 --alsologtostderr" : exit status 82
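Exit status 82 here corresponds to the GUEST_STOP_TIMEOUT in the stderr above: ha-565823-m03 was still Running after all 120 stop polls (roughly two minutes). A Go sketch, assuming the out/minikube-linux-amd64 path from the log, that reruns the same stop command and distinguishes that exit code from other failures:

// stopcheck.go: rerun the failing stop and report whether it hit the
// GUEST_STOP_TIMEOUT exit code (82, per the stderr above) or some other error.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "ha-565823",
		"-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("stop succeeded")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 82:
		fmt.Println("stop timed out waiting for a VM to power off (GUEST_STOP_TIMEOUT)")
	default:
		fmt.Printf("stop failed: %v\n", err)
	}
}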
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565823 --wait=true -v=7 --alsologtostderr
E1212 00:08:23.401588   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-565823 --wait=true -v=7 --alsologtostderr: (4m13.445790991s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-565823
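After the 4m13s restart, the test only re-lists the nodes. A hedged Go sketch that instead polls until all four nodes from the cluster config above (ha-565823, m02, m03, m04) report Ready; the kubectl context comes from the log, while the five-minute budget and ten-second poll interval are illustrative choices:

// waitready.go: poll node Ready status after the restart until all 4 nodes
// report Ready or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		// One "<node name> <Ready condition status>" pair per line.
		out, err := exec.Command("kubectl", "--context", "ha-565823", "get", "nodes",
			"-o", `jsonpath={range .items[*]}{.metadata.name} {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
		if err == nil && strings.Count(string(out), " True") == 4 {
			fmt.Print(string(out))
			fmt.Println("all 4 nodes Ready")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for nodes to become Ready")
}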
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565823 -n ha-565823
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 logs -n 25: (2.441265901s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m04 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp testdata/cp-test.txt                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m04_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03:/home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m03 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565823 node stop m02 -v=7                                                     | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565823 node start m02 -v=7                                                    | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565823 -v=7                                                           | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-565823 -v=7                                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-565823 --wait=true -v=7                                                    | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:08 UTC | 12 Dec 24 00:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565823                                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:08:19
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:08:19.323108  111958 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:08:19.323212  111958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:08:19.323216  111958 out.go:358] Setting ErrFile to fd 2...
	I1212 00:08:19.323220  111958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:08:19.323395  111958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:08:19.323963  111958 out.go:352] Setting JSON to false
	I1212 00:08:19.324829  111958 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10241,"bootTime":1733951858,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:08:19.324937  111958 start.go:139] virtualization: kvm guest
	I1212 00:08:19.327876  111958 out.go:177] * [ha-565823] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:08:19.329113  111958 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:08:19.329114  111958 notify.go:220] Checking for updates...
	I1212 00:08:19.331447  111958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:08:19.332595  111958 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:08:19.333611  111958 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:08:19.334868  111958 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:08:19.336193  111958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:08:19.337815  111958 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:08:19.337898  111958 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:08:19.338370  111958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:08:19.338432  111958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:08:19.354334  111958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I1212 00:08:19.354824  111958 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:08:19.355418  111958 main.go:141] libmachine: Using API Version  1
	I1212 00:08:19.355451  111958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:08:19.355911  111958 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:08:19.356107  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:08:19.392520  111958 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:08:19.393859  111958 start.go:297] selected driver: kvm2
	I1212 00:08:19.393871  111958 start.go:901] validating driver "kvm2" against &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:08:19.394046  111958 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:08:19.394485  111958 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:08:19.394583  111958 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:08:19.409501  111958 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:08:19.410200  111958 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:08:19.410234  111958 cni.go:84] Creating CNI manager for ""
	I1212 00:08:19.410298  111958 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 00:08:19.410360  111958 start.go:340] cluster config:
	{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:08:19.410552  111958 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:08:19.412989  111958 out.go:177] * Starting "ha-565823" primary control-plane node in "ha-565823" cluster
	I1212 00:08:19.414475  111958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:08:19.414513  111958 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:08:19.414521  111958 cache.go:56] Caching tarball of preloaded images
	I1212 00:08:19.414607  111958 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:08:19.414619  111958 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 00:08:19.414735  111958 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:08:19.414915  111958 start.go:360] acquireMachinesLock for ha-565823: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:08:19.414956  111958 start.go:364] duration metric: took 23.741µs to acquireMachinesLock for "ha-565823"
	I1212 00:08:19.414970  111958 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:08:19.414979  111958 fix.go:54] fixHost starting: 
	I1212 00:08:19.415232  111958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:08:19.415266  111958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:08:19.429477  111958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I1212 00:08:19.430000  111958 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:08:19.430524  111958 main.go:141] libmachine: Using API Version  1
	I1212 00:08:19.430543  111958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:08:19.430889  111958 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:08:19.431095  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:08:19.431240  111958 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:08:19.432889  111958 fix.go:112] recreateIfNeeded on ha-565823: state=Running err=<nil>
	W1212 00:08:19.432921  111958 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:08:19.434710  111958 out.go:177] * Updating the running kvm2 "ha-565823" VM ...
	I1212 00:08:19.435824  111958 machine.go:93] provisionDockerMachine start ...
	I1212 00:08:19.435844  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:08:19.436048  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:19.438367  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.438871  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.438898  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.439025  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:19.439181  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.439314  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.439444  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:19.439625  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:08:19.439836  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:08:19.439851  111958 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 00:08:19.560934  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823
	
	I1212 00:08:19.560977  111958 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1212 00:08:19.561235  111958 buildroot.go:166] provisioning hostname "ha-565823"
	I1212 00:08:19.561262  111958 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1212 00:08:19.561427  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:19.564047  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.564515  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.564534  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.564701  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:19.564860  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.564999  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.565134  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:19.565303  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:08:19.565470  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:08:19.565481  111958 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823 && echo "ha-565823" | sudo tee /etc/hostname
	I1212 00:08:19.695524  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823
	
	I1212 00:08:19.695557  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:19.698248  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.698649  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.698680  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.698854  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:19.699063  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.699238  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.699374  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:19.699539  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:08:19.699744  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:08:19.699761  111958 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:08:19.812641  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:08:19.812700  111958 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:08:19.812741  111958 buildroot.go:174] setting up certificates
	I1212 00:08:19.812753  111958 provision.go:84] configureAuth start
	I1212 00:08:19.812771  111958 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1212 00:08:19.813051  111958 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1212 00:08:19.815879  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.816258  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.816286  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.816422  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:19.818759  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.819124  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.819149  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.819311  111958 provision.go:143] copyHostCerts
	I1212 00:08:19.819337  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:08:19.819389  111958 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:08:19.819405  111958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:08:19.819471  111958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:08:19.819559  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:08:19.819579  111958 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:08:19.819604  111958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:08:19.819633  111958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:08:19.819677  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:08:19.819693  111958 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:08:19.819699  111958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:08:19.819736  111958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:08:19.819792  111958 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823 san=[127.0.0.1 192.168.39.19 ha-565823 localhost minikube]
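For context, minikube generates this machine server certificate in its own Go code rather than by shelling out; a rough openssl equivalent of a cert with the same SAN list and org (file names here are illustrative, not paths from the run above) would be:

    # Illustrative sketch only; minikube does this in Go, not via the openssl CLI.
    # SANs and org taken from the log line above; ca.pem/ca-key.pem stand in for the
    # machine CA under .minikube/certs.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ha-565823"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem \
      -extfile <(printf "subjectAltName=DNS:ha-565823,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.39.19")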
	I1212 00:08:20.141095  111958 provision.go:177] copyRemoteCerts
	I1212 00:08:20.141179  111958 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:08:20.141218  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:20.143890  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:20.144181  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:20.144206  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:20.144380  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:20.144583  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:20.144749  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:20.144885  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:08:20.230719  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:08:20.230784  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:08:20.256713  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:08:20.256811  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:08:20.283669  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:08:20.283747  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1212 00:08:20.309687  111958 provision.go:87] duration metric: took 496.889277ms to configureAuth
	I1212 00:08:20.309728  111958 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:08:20.310010  111958 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:08:20.310109  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:20.312697  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:20.313083  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:20.313106  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:20.313251  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:20.313456  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:20.313614  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:20.313755  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:20.313902  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:08:20.314100  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:08:20.314124  111958 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:09:51.280947  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:09:51.280977  111958 machine.go:96] duration metric: took 1m31.845138426s to provisionDockerMachine
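Comparing timestamps, nearly all of the 1m31.8s spent in provisionDockerMachine is the single SSH command above that writes /etc/sysconfig/crio.minikube and then runs `systemctl restart crio` (issued at 00:08:20, returned at 00:09:51). Assuming the guest's crio unit sources that sysconfig file (which is presumably why minikube writes it there), the result can be checked on the node with something like:

    # Check the environment drop-in written above and crio's state after the restart
    cat /etc/sysconfig/crio.minikube
    systemctl show crio --property=ActiveState,SubState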
	I1212 00:09:51.280994  111958 start.go:293] postStartSetup for "ha-565823" (driver="kvm2")
	I1212 00:09:51.281022  111958 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:09:51.281049  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.281387  111958 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:09:51.281433  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.284708  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.285148  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.285208  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.285414  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.285585  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.285729  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.285833  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:09:51.376524  111958 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:09:51.381146  111958 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:09:51.381173  111958 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:09:51.381242  111958 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:09:51.381347  111958 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:09:51.381362  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:09:51.381444  111958 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:09:51.391488  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:09:51.415333  111958 start.go:296] duration metric: took 134.321373ms for postStartSetup
	I1212 00:09:51.415395  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.415678  111958 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1212 00:09:51.415708  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.418476  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.418873  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.418899  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.419055  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.419220  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.419354  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.419497  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	W1212 00:09:51.506306  111958 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1212 00:09:51.506349  111958 fix.go:56] duration metric: took 1m32.091369328s for fixHost
	I1212 00:09:51.506381  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.509122  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.509533  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.509563  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.509744  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.509939  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.510120  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.510259  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.510398  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:09:51.510626  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:09:51.510640  111958 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:09:51.620479  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733962191.576165122
	
	I1212 00:09:51.620503  111958 fix.go:216] guest clock: 1733962191.576165122
	I1212 00:09:51.620511  111958 fix.go:229] Guest: 2024-12-12 00:09:51.576165122 +0000 UTC Remote: 2024-12-12 00:09:51.506365694 +0000 UTC m=+92.222603569 (delta=69.799428ms)
	I1212 00:09:51.620531  111958 fix.go:200] guest clock delta is within tolerance: 69.799428ms
	I1212 00:09:51.620536  111958 start.go:83] releasing machines lock for "ha-565823", held for 1m32.205570525s
	I1212 00:09:51.620557  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.620816  111958 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1212 00:09:51.623417  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.623820  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.623847  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.624070  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.624586  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.624759  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.624844  111958 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:09:51.624906  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.624961  111958 ssh_runner.go:195] Run: cat /version.json
	I1212 00:09:51.624984  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.627332  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.627696  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.627724  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.627743  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.627855  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.628004  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.628133  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.628168  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.628201  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.628295  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.628305  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:09:51.628429  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.628555  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.628654  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:09:51.709370  111958 ssh_runner.go:195] Run: systemctl --version
	I1212 00:09:51.741922  111958 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:09:51.905259  111958 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:09:51.914637  111958 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:09:51.914713  111958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:09:51.924724  111958 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:09:51.924747  111958 start.go:495] detecting cgroup driver to use...
	I1212 00:09:51.924820  111958 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:09:51.942269  111958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:09:51.956194  111958 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:09:51.956259  111958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:09:51.970004  111958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:09:51.983825  111958 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:09:52.126599  111958 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:09:52.270697  111958 docker.go:233] disabling docker service ...
	I1212 00:09:52.270784  111958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:09:52.288328  111958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:09:52.302507  111958 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:09:52.449241  111958 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:09:52.600338  111958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:09:52.614800  111958 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:09:52.633872  111958 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:09:52.633943  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.644517  111958 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:09:52.644568  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.654959  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.665228  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.675503  111958 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:09:52.686342  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.696649  111958 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.707887  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.718023  111958 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:09:52.727269  111958 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:09:52.736575  111958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:09:52.881471  111958 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:09:55.954858  111958 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.073349842s)
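Taken together, the sed edits above (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl) aim at a CRI-O drop-in roughly like the following. This is a sketch of the intended end state, not a copy of the file from the node, and the section headers follow CRI-O's documented TOML layout rather than anything shown in the log:

    # Approximate end state of /etc/crio/crio.conf.d/02-crio.conf (sketch)
    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio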
	I1212 00:09:55.954898  111958 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:09:55.954949  111958 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:09:55.960197  111958 start.go:563] Will wait 60s for crictl version
	I1212 00:09:55.960246  111958 ssh_runner.go:195] Run: which crictl
	I1212 00:09:55.964101  111958 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:09:56.005913  111958 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
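The version block above is crictl talking to CRI-O over the socket configured in the /etc/crictl.yaml written earlier; with that file in place the endpoint flag is optional, but spelling it out makes a quick manual check unambiguous (illustrative commands, not taken from this run):

    # Confirm crictl reaches CRI-O on the configured endpoint
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl info    # runtime status and conditions as JSON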
	I1212 00:09:56.006014  111958 ssh_runner.go:195] Run: crio --version
	I1212 00:09:56.036798  111958 ssh_runner.go:195] Run: crio --version
	I1212 00:09:56.073409  111958 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:09:56.074634  111958 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1212 00:09:56.077164  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:56.077459  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:56.077489  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:56.077692  111958 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:09:56.082715  111958 kubeadm.go:883] updating cluster {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:09:56.082849  111958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:09:56.082894  111958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:09:56.130586  111958 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:09:56.130616  111958 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:09:56.130674  111958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:09:56.167602  111958 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:09:56.167630  111958 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:09:56.167641  111958 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.2 crio true true} ...
	I1212 00:09:56.167762  111958 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:09:56.167829  111958 ssh_runner.go:195] Run: crio config
	I1212 00:09:56.221044  111958 cni.go:84] Creating CNI manager for ""
	I1212 00:09:56.221071  111958 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 00:09:56.221085  111958 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:09:56.221117  111958 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565823 NodeName:ha-565823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:09:56.221268  111958 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
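Since this generated config is copied later in the run to /var/tmp/minikube/kubeadm.yaml.new on the node, it can be sanity-checked against the v1beta4 API with kubeadm itself. This is an illustrative step, not one minikube performs, and it assumes a kubeadm release recent enough to ship `kubeadm config validate`:

    # Validate the generated kubeadm config on the node (sketch)
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new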
	
	I1212 00:09:56.221296  111958 kube-vip.go:115] generating kube-vip config ...
	I1212 00:09:56.221341  111958 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:09:56.233595  111958 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:09:56.233702  111958 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
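The manifest above runs kube-vip as a static pod with ARP-based control-plane load balancing, so the 192.168.39.254 VIP should be bound to eth0 on whichever control-plane node currently holds the plndr-cp-lock lease. A quick way to see that (illustrative, not part of the captured run):

    # On the current kube-vip leader, the API-server VIP sits on eth0
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # The leader-election lease named in the manifest lives in kube-system
    kubectl -n kube-system get lease plndr-cp-lock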
	I1212 00:09:56.233773  111958 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:09:56.244006  111958 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:09:56.244072  111958 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 00:09:56.253838  111958 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1212 00:09:56.270881  111958 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:09:56.287539  111958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1212 00:09:56.304023  111958 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:09:56.322581  111958 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:09:56.326966  111958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:09:56.475793  111958 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:09:56.492303  111958 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.19
	I1212 00:09:56.492357  111958 certs.go:194] generating shared ca certs ...
	I1212 00:09:56.492380  111958 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:09:56.492591  111958 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:09:56.492644  111958 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:09:56.492656  111958 certs.go:256] generating profile certs ...
	I1212 00:09:56.492766  111958 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:09:56.492806  111958 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.9a933a78
	I1212 00:09:56.492828  111958 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.9a933a78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.95 192.168.39.254]
	I1212 00:09:56.738298  111958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.9a933a78 ...
	I1212 00:09:56.738330  111958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.9a933a78: {Name:mk1e8e71efdd15b42075d34253ef028b61765a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:09:56.738499  111958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.9a933a78 ...
	I1212 00:09:56.738510  111958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.9a933a78: {Name:mk81dc41dced38bb672aa7ab62b58cd540312f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:09:56.738591  111958 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.9a933a78 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:09:56.738733  111958 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.9a933a78 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
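The regenerated apiserver certificate is signed for every control-plane IP plus the HA VIP 192.168.39.254, which is what lets clients reach the API through the shared endpoint regardless of which node answers. Once the copy to /var/lib/minikube/certs further down completes, the SAN list can be inspected on the node:

    # Inspect the SANs baked into the profile's apiserver certificate
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'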
	I1212 00:09:56.738858  111958 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:09:56.738875  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:09:56.738888  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:09:56.738901  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:09:56.738918  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:09:56.738931  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:09:56.738951  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:09:56.738963  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:09:56.738975  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:09:56.739028  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:09:56.739067  111958 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:09:56.739078  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:09:56.739101  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:09:56.739123  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:09:56.739143  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:09:56.739180  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:09:56.739208  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:09:56.739231  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:09:56.739243  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:09:56.739900  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:09:56.765092  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:09:56.789524  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:09:56.813936  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:09:56.837827  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 00:09:56.861502  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:09:56.900136  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:09:56.938533  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:09:56.962282  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:09:56.985687  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:09:57.009046  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:09:57.032175  111958 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:09:57.048754  111958 ssh_runner.go:195] Run: openssl version
	I1212 00:09:57.054878  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:09:57.065994  111958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:09:57.070501  111958 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:09:57.070557  111958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:09:57.076109  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:09:57.085834  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:09:57.097002  111958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:09:57.101339  111958 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:09:57.101385  111958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:09:57.106991  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:09:57.116929  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:09:57.128441  111958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:09:57.133075  111958 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:09:57.133113  111958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:09:57.138984  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
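
The three `test ... || ln -fs` pairs above show how minikube publishes each CA into the guest's trust store: it copies the PEM under /usr/share/ca-certificates, asks `openssl x509 -hash -noout` for the subject hash, and symlinks /etc/ssl/certs/<hash>.0 at the PEM (for example 3ec20f2e.0 -> 936002.pem). A minimal local Go sketch of that pattern, shelling out the same way ssh_runner does; the helper name and paths are illustrative, not minikube's code:

```go
// trustlink.go: a local sketch of the hash-and-symlink pattern seen in the
// log above (minikube runs the same commands over SSH via ssh_runner).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustStore computes the OpenSSL subject hash of pemPath and creates
// <trustDir>/<hash>.0 pointing at it, mirroring
// `openssl x509 -hash -noout -in <pem>` followed by `ln -fs ...`.
func linkIntoTrustStore(pemPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	// Replace any stale link, as `ln -fs` does.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkIntoTrustStore("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
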
	I1212 00:09:57.148701  111958 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:09:57.153138  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:09:57.158892  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:09:57.164626  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:09:57.170141  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:09:57.175969  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:09:57.181552  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
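
Each `-checkend 86400` invocation above asks whether the named certificate will still be valid 24 hours (86,400 seconds) from now; a failing check would force regeneration. An equivalent pure-Go check with crypto/x509, as a sketch that assumes a single-certificate PEM file (not minikube's actual implementation):

```go
// certcheck.go: a sketch of the validity test behind
// `openssl x509 -noout -in <crt> -checkend 86400`. Paths are illustrative.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemBytes expires
// within the given window (86400s = 24h in the log above).
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```
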
	I1212 00:09:57.187134  111958 kubeadm.go:392] StartCluster: {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:09:57.187280  111958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:09:57.187337  111958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:09:57.225417  111958 cri.go:89] found id: "5a35809e3509198342321446b137c2ec81b705d2d75f3d45649231a9834f9c8f"
	I1212 00:09:57.225444  111958 cri.go:89] found id: "50dabc2311179ad90a354053055125ab2c7053eeec2d9ffa191f4c933f3284c6"
	I1212 00:09:57.225450  111958 cri.go:89] found id: "8049dfebf9c9fe178ac072006401ab999e3752f4dad344eec6ced3f1c75bd004"
	I1212 00:09:57.225454  111958 cri.go:89] found id: "b4a684e30d3ad22203c06045f00cef118a261bfa08f332883d58e350c0395cc3"
	I1212 00:09:57.225457  111958 cri.go:89] found id: "999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481"
	I1212 00:09:57.225460  111958 cri.go:89] found id: "0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3"
	I1212 00:09:57.225463  111958 cri.go:89] found id: "bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098"
	I1212 00:09:57.225468  111958 cri.go:89] found id: "514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57"
	I1212 00:09:57.225473  111958 cri.go:89] found id: "768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778"
	I1212 00:09:57.225482  111958 cri.go:89] found id: "452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1"
	I1212 00:09:57.225487  111958 cri.go:89] found id: "743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b"
	I1212 00:09:57.225490  111958 cri.go:89] found id: "4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95"
	I1212 00:09:57.225492  111958 cri.go:89] found id: "b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4"
	I1212 00:09:57.225495  111958 cri.go:89] found id: ""
	I1212 00:09:57.225538  111958 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565823 -n ha-565823
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.58s)
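
The truncated log above ends with minikube enumerating kube-system containers through the CRI (`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`) and collecting the returned IDs before falling back to `runc list -f json`. A rough sketch of that enumeration step, shelling out to crictl and splitting the ID list; the flags mirror the log, everything else is illustrative rather than minikube's cri package:

```go
// crilist.go: a sketch of listing kube-system container IDs via crictl,
// mirroring the "found id:" lines in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs printed by
// `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
```
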

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 stop -v=7 --alsologtostderr
E1212 00:12:55.697839   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565823 stop -v=7 --alsologtostderr: exit status 82 (2m0.48231249s)

                                                
                                                
-- stdout --
	* Stopping node "ha-565823-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:12:53.219902  113818 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:12:53.220002  113818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:12:53.220010  113818 out.go:358] Setting ErrFile to fd 2...
	I1212 00:12:53.220013  113818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:12:53.220232  113818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:12:53.220456  113818 out.go:352] Setting JSON to false
	I1212 00:12:53.220535  113818 mustload.go:65] Loading cluster: ha-565823
	I1212 00:12:53.220931  113818 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:12:53.221037  113818 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:12:53.221219  113818 mustload.go:65] Loading cluster: ha-565823
	I1212 00:12:53.221352  113818 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:12:53.221383  113818 stop.go:39] StopHost: ha-565823-m04
	I1212 00:12:53.221769  113818 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:12:53.221816  113818 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:12:53.236589  113818 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45231
	I1212 00:12:53.237088  113818 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:12:53.237708  113818 main.go:141] libmachine: Using API Version  1
	I1212 00:12:53.237735  113818 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:12:53.238057  113818 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:12:53.240320  113818 out.go:177] * Stopping node "ha-565823-m04"  ...
	I1212 00:12:53.241659  113818 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1212 00:12:53.241696  113818 main.go:141] libmachine: (ha-565823-m04) Calling .DriverName
	I1212 00:12:53.241896  113818 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1212 00:12:53.241931  113818 main.go:141] libmachine: (ha-565823-m04) Calling .GetSSHHostname
	I1212 00:12:53.244779  113818 main.go:141] libmachine: (ha-565823-m04) DBG | domain ha-565823-m04 has defined MAC address 52:54:00:4e:34:a1 in network mk-ha-565823
	I1212 00:12:53.245192  113818 main.go:141] libmachine: (ha-565823-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:34:a1", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 01:12:20 +0000 UTC Type:0 Mac:52:54:00:4e:34:a1 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-565823-m04 Clientid:01:52:54:00:4e:34:a1}
	I1212 00:12:53.245218  113818 main.go:141] libmachine: (ha-565823-m04) DBG | domain ha-565823-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:4e:34:a1 in network mk-ha-565823
	I1212 00:12:53.245326  113818 main.go:141] libmachine: (ha-565823-m04) Calling .GetSSHPort
	I1212 00:12:53.245456  113818 main.go:141] libmachine: (ha-565823-m04) Calling .GetSSHKeyPath
	I1212 00:12:53.245584  113818 main.go:141] libmachine: (ha-565823-m04) Calling .GetSSHUsername
	I1212 00:12:53.245729  113818 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823-m04/id_rsa Username:docker}
	I1212 00:12:53.335503  113818 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1212 00:12:53.388946  113818 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1212 00:12:53.442004  113818 main.go:141] libmachine: Stopping "ha-565823-m04"...
	I1212 00:12:53.442038  113818 main.go:141] libmachine: (ha-565823-m04) Calling .GetState
	I1212 00:12:53.443760  113818 main.go:141] libmachine: (ha-565823-m04) Calling .Stop
	I1212 00:12:53.447329  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 0/120
	I1212 00:12:54.448895  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 1/120
	I1212 00:12:55.450254  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 2/120
	I1212 00:12:56.451906  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 3/120
	I1212 00:12:57.453251  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 4/120
	I1212 00:12:58.454977  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 5/120
	I1212 00:12:59.456470  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 6/120
	I1212 00:13:00.458066  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 7/120
	I1212 00:13:01.459272  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 8/120
	I1212 00:13:02.460635  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 9/120
	I1212 00:13:03.463029  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 10/120
	I1212 00:13:04.464571  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 11/120
	I1212 00:13:05.465827  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 12/120
	I1212 00:13:06.467226  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 13/120
	I1212 00:13:07.468585  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 14/120
	I1212 00:13:08.470430  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 15/120
	I1212 00:13:09.471904  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 16/120
	I1212 00:13:10.473643  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 17/120
	I1212 00:13:11.474934  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 18/120
	I1212 00:13:12.476260  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 19/120
	I1212 00:13:13.478263  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 20/120
	I1212 00:13:14.479695  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 21/120
	I1212 00:13:15.481068  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 22/120
	I1212 00:13:16.482698  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 23/120
	I1212 00:13:17.484216  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 24/120
	I1212 00:13:18.486539  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 25/120
	I1212 00:13:19.487966  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 26/120
	I1212 00:13:20.490274  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 27/120
	I1212 00:13:21.491649  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 28/120
	I1212 00:13:22.492935  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 29/120
	I1212 00:13:23.495077  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 30/120
	I1212 00:13:24.496566  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 31/120
	I1212 00:13:25.498848  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 32/120
	I1212 00:13:26.500339  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 33/120
	I1212 00:13:27.502075  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 34/120
	I1212 00:13:28.504160  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 35/120
	I1212 00:13:29.505387  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 36/120
	I1212 00:13:30.507648  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 37/120
	I1212 00:13:31.508921  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 38/120
	I1212 00:13:32.510466  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 39/120
	I1212 00:13:33.512573  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 40/120
	I1212 00:13:34.514003  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 41/120
	I1212 00:13:35.515444  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 42/120
	I1212 00:13:36.517098  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 43/120
	I1212 00:13:37.519629  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 44/120
	I1212 00:13:38.521609  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 45/120
	I1212 00:13:39.522982  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 46/120
	I1212 00:13:40.524201  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 47/120
	I1212 00:13:41.525992  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 48/120
	I1212 00:13:42.527409  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 49/120
	I1212 00:13:43.529287  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 50/120
	I1212 00:13:44.530722  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 51/120
	I1212 00:13:45.532247  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 52/120
	I1212 00:13:46.534577  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 53/120
	I1212 00:13:47.536126  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 54/120
	I1212 00:13:48.537890  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 55/120
	I1212 00:13:49.539372  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 56/120
	I1212 00:13:50.540728  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 57/120
	I1212 00:13:51.542044  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 58/120
	I1212 00:13:52.543400  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 59/120
	I1212 00:13:53.545861  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 60/120
	I1212 00:13:54.547560  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 61/120
	I1212 00:13:55.549858  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 62/120
	I1212 00:13:56.551573  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 63/120
	I1212 00:13:57.554083  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 64/120
	I1212 00:13:58.556401  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 65/120
	I1212 00:13:59.558177  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 66/120
	I1212 00:14:00.559471  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 67/120
	I1212 00:14:01.560814  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 68/120
	I1212 00:14:02.563333  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 69/120
	I1212 00:14:03.565389  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 70/120
	I1212 00:14:04.566843  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 71/120
	I1212 00:14:05.568492  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 72/120
	I1212 00:14:06.570096  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 73/120
	I1212 00:14:07.572727  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 74/120
	I1212 00:14:08.574645  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 75/120
	I1212 00:14:09.576089  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 76/120
	I1212 00:14:10.577536  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 77/120
	I1212 00:14:11.578957  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 78/120
	I1212 00:14:12.580459  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 79/120
	I1212 00:14:13.582555  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 80/120
	I1212 00:14:14.583911  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 81/120
	I1212 00:14:15.586271  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 82/120
	I1212 00:14:16.587891  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 83/120
	I1212 00:14:17.590054  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 84/120
	I1212 00:14:18.591782  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 85/120
	I1212 00:14:19.593056  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 86/120
	I1212 00:14:20.594517  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 87/120
	I1212 00:14:21.595857  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 88/120
	I1212 00:14:22.597456  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 89/120
	I1212 00:14:23.599480  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 90/120
	I1212 00:14:24.600837  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 91/120
	I1212 00:14:25.602264  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 92/120
	I1212 00:14:26.603457  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 93/120
	I1212 00:14:27.604771  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 94/120
	I1212 00:14:28.606301  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 95/120
	I1212 00:14:29.607634  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 96/120
	I1212 00:14:30.608978  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 97/120
	I1212 00:14:31.610519  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 98/120
	I1212 00:14:32.612116  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 99/120
	I1212 00:14:33.614260  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 100/120
	I1212 00:14:34.615351  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 101/120
	I1212 00:14:35.616718  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 102/120
	I1212 00:14:36.618201  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 103/120
	I1212 00:14:37.619731  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 104/120
	I1212 00:14:38.621892  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 105/120
	I1212 00:14:39.623442  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 106/120
	I1212 00:14:40.624761  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 107/120
	I1212 00:14:41.626597  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 108/120
	I1212 00:14:42.628094  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 109/120
	I1212 00:14:43.630375  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 110/120
	I1212 00:14:44.632475  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 111/120
	I1212 00:14:45.633972  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 112/120
	I1212 00:14:46.635298  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 113/120
	I1212 00:14:47.636544  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 114/120
	I1212 00:14:48.637877  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 115/120
	I1212 00:14:49.639196  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 116/120
	I1212 00:14:50.641704  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 117/120
	I1212 00:14:51.643003  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 118/120
	I1212 00:14:52.644992  113818 main.go:141] libmachine: (ha-565823-m04) Waiting for machine to stop 119/120
	I1212 00:14:53.645583  113818 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1212 00:14:53.645642  113818 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 00:14:53.647510  113818 out.go:201] 
	W1212 00:14:53.648789  113818 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 00:14:53.648813  113818 out.go:270] * 
	* 
	W1212 00:14:53.652218  113818 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:14:53.653698  113818 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-565823 stop -v=7 --alsologtostderr": exit status 82
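
The stderr above shows the shape of the failure: the kvm2 driver first backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, then calls Stop and polls the VM state once per second for 120 attempts; because ha-565823-m04 never leaves "Running", minikube gives up with GUEST_STOP_TIMEOUT and exit status 82. A minimal sketch of that poll-until-stopped pattern; the Machine interface and names here are hypothetical, not libmachine's real driver API:

```go
// stoppoll.go: a sketch of the "Waiting for machine to stop n/120" loop seen
// above. The Machine interface is hypothetical; libmachine's API differs.
package main

import (
	"errors"
	"fmt"
	"time"
)

type Machine interface {
	Stop() error            // request shutdown
	State() (string, error) // current VM state, e.g. "Running" or "Stopped"
}

// stopWithTimeout requests a stop, then polls once per second for up to
// attempts tries, failing the same way the log does if the VM stays Running.
func stopWithTimeout(m Machine, attempts int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := m.State()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stubbornVM never stops, reproducing the failure mode in this test.
type stubbornVM struct{}

func (stubbornVM) Stop() error            { return nil }
func (stubbornVM) State() (string, error) { return "Running", nil }

func main() {
	if err := stopWithTimeout(stubbornVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}
```
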
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr: (19.088994644s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr": 
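
The three assertions above inspect the `minikube status -v=7` output and count how many control-plane, kubelet, and apiserver entries report Stopped; because the m04 stop timed out, the counts do not match what a fully stopped cluster should show. A rough sketch of that kind of count-based check (a guess at the approach, not the actual ha_test helpers):

```go
// statuscount.go: a sketch of counting "Stopped" markers in status output,
// in the spirit of the assertions above (not the real test code).
package main

import (
	"fmt"
	"strings"
)

// countStopped returns how many lines of a `minikube status` dump mention
// the given component together with the word "Stopped".
func countStopped(statusOutput, component string) int {
	n := 0
	for _, line := range strings.Split(statusOutput, "\n") {
		if strings.Contains(line, component) && strings.Contains(line, "Stopped") {
			n++
		}
	}
	return n
}

func main() {
	sample := "kubelet: Stopped\napiserver: Stopped\nkubelet: Running\n"
	fmt.Println("stopped kubelets:", countStopped(sample, "kubelet"))
}
```
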
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565823 -n ha-565823
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 logs -n 25: (2.137085254s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m04 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp testdata/cp-test.txt                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823:/home/docker/cp-test_ha-565823-m04_ha-565823.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823 sudo cat                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m02:/home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m02 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m03:/home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n                                                                 | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | ha-565823-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565823 ssh -n ha-565823-m03 sudo cat                                          | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC | 12 Dec 24 00:03 UTC |
	|         | /home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565823 node stop m02 -v=7                                                     | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:03 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565823 node start m02 -v=7                                                    | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565823 -v=7                                                           | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-565823 -v=7                                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:06 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-565823 --wait=true -v=7                                                    | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:08 UTC | 12 Dec 24 00:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565823                                                                | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC |                     |
	| node    | ha-565823 node delete m03 -v=7                                                   | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC | 12 Dec 24 00:12 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-565823 stop -v=7                                                              | ha-565823 | jenkins | v1.34.0 | 12 Dec 24 00:12 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:08:19
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:08:19.323108  111958 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:08:19.323212  111958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:08:19.323216  111958 out.go:358] Setting ErrFile to fd 2...
	I1212 00:08:19.323220  111958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:08:19.323395  111958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:08:19.323963  111958 out.go:352] Setting JSON to false
	I1212 00:08:19.324829  111958 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10241,"bootTime":1733951858,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:08:19.324937  111958 start.go:139] virtualization: kvm guest
	I1212 00:08:19.327876  111958 out.go:177] * [ha-565823] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:08:19.329113  111958 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:08:19.329114  111958 notify.go:220] Checking for updates...
	I1212 00:08:19.331447  111958 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:08:19.332595  111958 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:08:19.333611  111958 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:08:19.334868  111958 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:08:19.336193  111958 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:08:19.337815  111958 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:08:19.337898  111958 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:08:19.338370  111958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:08:19.338432  111958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:08:19.354334  111958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I1212 00:08:19.354824  111958 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:08:19.355418  111958 main.go:141] libmachine: Using API Version  1
	I1212 00:08:19.355451  111958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:08:19.355911  111958 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:08:19.356107  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:08:19.392520  111958 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:08:19.393859  111958 start.go:297] selected driver: kvm2
	I1212 00:08:19.393871  111958 start.go:901] validating driver "kvm2" against &{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:08:19.394046  111958 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:08:19.394485  111958 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:08:19.394583  111958 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:08:19.409501  111958 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:08:19.410200  111958 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:08:19.410234  111958 cni.go:84] Creating CNI manager for ""
	I1212 00:08:19.410298  111958 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 00:08:19.410360  111958 start.go:340] cluster config:
	{Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:08:19.410552  111958 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:08:19.412989  111958 out.go:177] * Starting "ha-565823" primary control-plane node in "ha-565823" cluster
	I1212 00:08:19.414475  111958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:08:19.414513  111958 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:08:19.414521  111958 cache.go:56] Caching tarball of preloaded images
	I1212 00:08:19.414607  111958 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:08:19.414619  111958 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 00:08:19.414735  111958 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/config.json ...
	I1212 00:08:19.414915  111958 start.go:360] acquireMachinesLock for ha-565823: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:08:19.414956  111958 start.go:364] duration metric: took 23.741µs to acquireMachinesLock for "ha-565823"
	I1212 00:08:19.414970  111958 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:08:19.414979  111958 fix.go:54] fixHost starting: 
	I1212 00:08:19.415232  111958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:08:19.415266  111958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:08:19.429477  111958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40391
	I1212 00:08:19.430000  111958 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:08:19.430524  111958 main.go:141] libmachine: Using API Version  1
	I1212 00:08:19.430543  111958 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:08:19.430889  111958 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:08:19.431095  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:08:19.431240  111958 main.go:141] libmachine: (ha-565823) Calling .GetState
	I1212 00:08:19.432889  111958 fix.go:112] recreateIfNeeded on ha-565823: state=Running err=<nil>
	W1212 00:08:19.432921  111958 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:08:19.434710  111958 out.go:177] * Updating the running kvm2 "ha-565823" VM ...
	I1212 00:08:19.435824  111958 machine.go:93] provisionDockerMachine start ...
	I1212 00:08:19.435844  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:08:19.436048  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:19.438367  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.438871  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.438898  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.439025  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:19.439181  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.439314  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.439444  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:19.439625  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:08:19.439836  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:08:19.439851  111958 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 00:08:19.560934  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823
	
	I1212 00:08:19.560977  111958 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1212 00:08:19.561235  111958 buildroot.go:166] provisioning hostname "ha-565823"
	I1212 00:08:19.561262  111958 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1212 00:08:19.561427  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:19.564047  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.564515  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.564534  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.564701  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:19.564860  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.564999  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.565134  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:19.565303  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:08:19.565470  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:08:19.565481  111958 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565823 && echo "ha-565823" | sudo tee /etc/hostname
	I1212 00:08:19.695524  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565823
	
	I1212 00:08:19.695557  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:19.698248  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.698649  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.698680  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.698854  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:19.699063  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.699238  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:19.699374  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:19.699539  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:08:19.699744  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:08:19.699761  111958 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565823/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:08:19.812641  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
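	For reference, the effect of the two hostname commands above can be spot-checked on the guest by hand (not something the test itself runs):
	    hostname                      # expect: ha-565823
	    grep ha-565823 /etc/hosts     # expect the 127.0.1.1 mapping added above, or a pre-existing entry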
	I1212 00:08:19.812700  111958 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:08:19.812741  111958 buildroot.go:174] setting up certificates
	I1212 00:08:19.812753  111958 provision.go:84] configureAuth start
	I1212 00:08:19.812771  111958 main.go:141] libmachine: (ha-565823) Calling .GetMachineName
	I1212 00:08:19.813051  111958 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1212 00:08:19.815879  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.816258  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.816286  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.816422  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:19.818759  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.819124  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:19.819149  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:19.819311  111958 provision.go:143] copyHostCerts
	I1212 00:08:19.819337  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:08:19.819389  111958 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:08:19.819405  111958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:08:19.819471  111958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:08:19.819559  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:08:19.819579  111958 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:08:19.819604  111958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:08:19.819633  111958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:08:19.819677  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:08:19.819693  111958 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:08:19.819699  111958 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:08:19.819736  111958 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:08:19.819792  111958 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.ha-565823 san=[127.0.0.1 192.168.39.19 ha-565823 localhost minikube]
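	The SAN list above is what ends up in the generated server certificate; assuming openssl is available on the Jenkins host, the result can be inspected with something like:
	    # print the Subject Alternative Names of the freshly generated server cert
	    openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'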
	I1212 00:08:20.141095  111958 provision.go:177] copyRemoteCerts
	I1212 00:08:20.141179  111958 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:08:20.141218  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:20.143890  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:20.144181  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:20.144206  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:20.144380  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:20.144583  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:20.144749  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:20.144885  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:08:20.230719  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:08:20.230784  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:08:20.256713  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:08:20.256811  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:08:20.283669  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:08:20.283747  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1212 00:08:20.309687  111958 provision.go:87] duration metric: took 496.889277ms to configureAuth
	I1212 00:08:20.309728  111958 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:08:20.310010  111958 config.go:182] Loaded profile config "ha-565823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:08:20.310109  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:08:20.312697  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:20.313083  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:08:20.313106  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:08:20.313251  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:08:20.313456  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:20.313614  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:08:20.313755  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:08:20.313902  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:08:20.314100  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:08:20.314124  111958 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:09:51.280947  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:09:51.280977  111958 machine.go:96] duration metric: took 1m31.845138426s to provisionDockerMachine
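	Most of that 1m31.8s appears to be the "systemctl restart crio" embedded in the sysconfig command above; a manual check that the drop-in landed and the runtime came back (a sketch, not part of the test) would be:
	    cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    systemctl is-active crio           # expect: active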
	I1212 00:09:51.280994  111958 start.go:293] postStartSetup for "ha-565823" (driver="kvm2")
	I1212 00:09:51.281022  111958 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:09:51.281049  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.281387  111958 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:09:51.281433  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.284708  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.285148  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.285208  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.285414  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.285585  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.285729  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.285833  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:09:51.376524  111958 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:09:51.381146  111958 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:09:51.381173  111958 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:09:51.381242  111958 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:09:51.381347  111958 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:09:51.381362  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:09:51.381444  111958 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:09:51.391488  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:09:51.415333  111958 start.go:296] duration metric: took 134.321373ms for postStartSetup
	I1212 00:09:51.415395  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.415678  111958 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1212 00:09:51.415708  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.418476  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.418873  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.418899  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.419055  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.419220  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.419354  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.419497  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	W1212 00:09:51.506306  111958 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1212 00:09:51.506349  111958 fix.go:56] duration metric: took 1m32.091369328s for fixHost
	I1212 00:09:51.506381  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.509122  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.509533  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.509563  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.509744  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.509939  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.510120  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.510259  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.510398  111958 main.go:141] libmachine: Using SSH client type: native
	I1212 00:09:51.510626  111958 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.19 22 <nil> <nil>}
	I1212 00:09:51.510640  111958 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:09:51.620479  111958 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733962191.576165122
	
	I1212 00:09:51.620503  111958 fix.go:216] guest clock: 1733962191.576165122
	I1212 00:09:51.620511  111958 fix.go:229] Guest: 2024-12-12 00:09:51.576165122 +0000 UTC Remote: 2024-12-12 00:09:51.506365694 +0000 UTC m=+92.222603569 (delta=69.799428ms)
	I1212 00:09:51.620531  111958 fix.go:200] guest clock delta is within tolerance: 69.799428ms
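	The delta is the difference between the guest's "date +%s.%N" output and the host-side timestamp; a rough standalone version of the same comparison (a sketch that ignores SSH round-trip time, unlike the real check) looks like:
	    # rough guest-vs-host clock skew; positive means the host clock is ahead
	    guest=$(ssh -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa \
	      docker@192.168.39.19 date +%s.%N)
	    host=$(date +%s.%N)
	    echo "$host $guest" | awk '{ printf "delta=%.3fs\n", $1 - $2 }'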
	I1212 00:09:51.620536  111958 start.go:83] releasing machines lock for "ha-565823", held for 1m32.205570525s
	I1212 00:09:51.620557  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.620816  111958 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1212 00:09:51.623417  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.623820  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.623847  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.624070  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.624586  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.624759  111958 main.go:141] libmachine: (ha-565823) Calling .DriverName
	I1212 00:09:51.624844  111958 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:09:51.624906  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.624961  111958 ssh_runner.go:195] Run: cat /version.json
	I1212 00:09:51.624984  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHHostname
	I1212 00:09:51.627332  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.627696  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.627724  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.627743  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.627855  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.628004  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.628133  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.628168  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:51.628201  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:51.628295  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHPort
	I1212 00:09:51.628305  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:09:51.628429  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHKeyPath
	I1212 00:09:51.628555  111958 main.go:141] libmachine: (ha-565823) Calling .GetSSHUsername
	I1212 00:09:51.628654  111958 sshutil.go:53] new ssh client: &{IP:192.168.39.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/ha-565823/id_rsa Username:docker}
	I1212 00:09:51.709370  111958 ssh_runner.go:195] Run: systemctl --version
	I1212 00:09:51.741922  111958 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:09:51.905259  111958 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:09:51.914637  111958 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:09:51.914713  111958 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:09:51.924724  111958 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
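	The find invocation above is logged with its shell quoting stripped; a properly escaped equivalent (an illustration of what runs on the guest, not the literal string minikube sends) is:
	    # rename bridge/podman CNI configs so they are ignored, skipping ones already disabled
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;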
	I1212 00:09:51.924747  111958 start.go:495] detecting cgroup driver to use...
	I1212 00:09:51.924820  111958 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:09:51.942269  111958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:09:51.956194  111958 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:09:51.956259  111958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:09:51.970004  111958 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:09:51.983825  111958 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:09:52.126599  111958 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:09:52.270697  111958 docker.go:233] disabling docker service ...
	I1212 00:09:52.270784  111958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:09:52.288328  111958 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:09:52.302507  111958 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:09:52.449241  111958 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:09:52.600338  111958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:09:52.614800  111958 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:09:52.633872  111958 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:09:52.633943  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.644517  111958 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:09:52.644568  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.654959  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.665228  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.675503  111958 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:09:52.686342  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.696649  111958 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.707887  111958 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:09:52.718023  111958 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:09:52.727269  111958 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:09:52.736575  111958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:09:52.881471  111958 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:09:55.954858  111958 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.073349842s)
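	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf pointing at the pause:3.10 image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl; a post-restart spot check (reconstructed from the commands, not a capture of the files) might be:
	    # expected keys after the edits: pause_image = "registry.k8s.io/pause:3.10",
	    # cgroup_manager = "cgroupfs", conmon_cgroup = "pod",
	    # default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    cat /etc/crictl.yaml    # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
	    sudo crictl info >/dev/null && echo "CRI-O endpoint reachable"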
	I1212 00:09:55.954898  111958 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:09:55.954949  111958 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:09:55.960197  111958 start.go:563] Will wait 60s for crictl version
	I1212 00:09:55.960246  111958 ssh_runner.go:195] Run: which crictl
	I1212 00:09:55.964101  111958 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:09:56.005913  111958 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:09:56.006014  111958 ssh_runner.go:195] Run: crio --version
	I1212 00:09:56.036798  111958 ssh_runner.go:195] Run: crio --version
	I1212 00:09:56.073409  111958 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:09:56.074634  111958 main.go:141] libmachine: (ha-565823) Calling .GetIP
	I1212 00:09:56.077164  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:56.077459  111958 main.go:141] libmachine: (ha-565823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:2e:da", ip: ""} in network mk-ha-565823: {Iface:virbr1 ExpiryTime:2024-12-12 00:59:05 +0000 UTC Type:0 Mac:52:54:00:2b:2e:da Iaid: IPaddr:192.168.39.19 Prefix:24 Hostname:ha-565823 Clientid:01:52:54:00:2b:2e:da}
	I1212 00:09:56.077489  111958 main.go:141] libmachine: (ha-565823) DBG | domain ha-565823 has defined IP address 192.168.39.19 and MAC address 52:54:00:2b:2e:da in network mk-ha-565823
	I1212 00:09:56.077692  111958 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:09:56.082715  111958 kubeadm.go:883] updating cluster {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:09:56.082849  111958 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:09:56.082894  111958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:09:56.130586  111958 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:09:56.130616  111958 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:09:56.130674  111958 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:09:56.167602  111958 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:09:56.167630  111958 cache_images.go:84] Images are preloaded, skipping loading
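	Both preload checks above parse "sudo crictl images --output json"; the same listing can be inspected by hand (jq assumed available, which the guest image may not ship):
	    # list the image tags the runtime already has, i.e. what the preload check saw
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'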
	I1212 00:09:56.167641  111958 kubeadm.go:934] updating node { 192.168.39.19 8443 v1.31.2 crio true true} ...
	I1212 00:09:56.167762  111958 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:09:56.167829  111958 ssh_runner.go:195] Run: crio config
	I1212 00:09:56.221044  111958 cni.go:84] Creating CNI manager for ""
	I1212 00:09:56.221071  111958 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 00:09:56.221085  111958 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:09:56.221117  111958 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.19 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565823 NodeName:ha-565823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:09:56.221268  111958 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565823"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.19"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.19"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
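	This generated config is written to /var/tmp/minikube/kubeadm.yaml.new a few steps below; assuming the kubeadm binary sits next to kubelet under /var/lib/minikube/binaries/v1.31.2, it can be sanity-checked on the node without applying it:
	    # validate the generated kubeadm config ("kubeadm config validate" exists in v1.26+)
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new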
	
	I1212 00:09:56.221296  111958 kube-vip.go:115] generating kube-vip config ...
	I1212 00:09:56.221341  111958 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1212 00:09:56.233595  111958 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1212 00:09:56.233702  111958 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
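	Once this static pod is up, kube-vip should bind the 192.168.39.254 VIP to eth0 on whichever control-plane node holds the plndr-cp-lock lease; a manual check (a sketch, assuming working kubeconfig access) would be:
	    # on the leading control-plane node: the VIP should appear on eth0
	    ip -4 addr show eth0 | grep 192.168.39.254
	    # from any machine with cluster access: see which node currently holds the lease
	    kubectl -n kube-system get lease plndr-cp-lock -o wide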
	I1212 00:09:56.233773  111958 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:09:56.244006  111958 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:09:56.244072  111958 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 00:09:56.253838  111958 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I1212 00:09:56.270881  111958 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:09:56.287539  111958 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2286 bytes)
	I1212 00:09:56.304023  111958 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1212 00:09:56.322581  111958 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1212 00:09:56.326966  111958 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:09:56.475793  111958 ssh_runner.go:195] Run: sudo systemctl start kubelet
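	After this restart the kubelet re-reads /etc/kubernetes/manifests, including the kube-vip.yaml copied above; a quick follow-up check (not part of the test) might be:
	    systemctl is-active kubelet        # expect: active
	    sudo crictl pods --name kube-vip   # the kube-vip static pod should appear once kubelet syncs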
	I1212 00:09:56.492303  111958 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823 for IP: 192.168.39.19
	I1212 00:09:56.492357  111958 certs.go:194] generating shared ca certs ...
	I1212 00:09:56.492380  111958 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:09:56.492591  111958 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:09:56.492644  111958 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:09:56.492656  111958 certs.go:256] generating profile certs ...
	I1212 00:09:56.492766  111958 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/client.key
	I1212 00:09:56.492806  111958 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.9a933a78
	I1212 00:09:56.492828  111958 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.9a933a78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.19 192.168.39.103 192.168.39.95 192.168.39.254]
	I1212 00:09:56.738298  111958 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.9a933a78 ...
	I1212 00:09:56.738330  111958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.9a933a78: {Name:mk1e8e71efdd15b42075d34253ef028b61765a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:09:56.738499  111958 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.9a933a78 ...
	I1212 00:09:56.738510  111958 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.9a933a78: {Name:mk81dc41dced38bb672aa7ab62b58cd540312f9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:09:56.738591  111958 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt.9a933a78 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt
	I1212 00:09:56.738733  111958 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key.9a933a78 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key
	I1212 00:09:56.738858  111958 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key
	I1212 00:09:56.738875  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:09:56.738888  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:09:56.738901  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:09:56.738918  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:09:56.738931  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:09:56.738951  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:09:56.738963  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:09:56.738975  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:09:56.739028  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:09:56.739067  111958 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:09:56.739078  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:09:56.739101  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:09:56.739123  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:09:56.739143  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:09:56.739180  111958 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:09:56.739208  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:09:56.739231  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:09:56.739243  111958 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:09:56.739900  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:09:56.765092  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:09:56.789524  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:09:56.813936  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:09:56.837827  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 00:09:56.861502  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:09:56.900136  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:09:56.938533  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/ha-565823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:09:56.962282  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:09:56.985687  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:09:57.009046  111958 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:09:57.032175  111958 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:09:57.048754  111958 ssh_runner.go:195] Run: openssl version
	I1212 00:09:57.054878  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:09:57.065994  111958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:09:57.070501  111958 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:09:57.070557  111958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:09:57.076109  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:09:57.085834  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:09:57.097002  111958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:09:57.101339  111958 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:09:57.101385  111958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:09:57.106991  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:09:57.116929  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:09:57.128441  111958 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:09:57.133075  111958 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:09:57.133113  111958 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:09:57.138984  111958 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
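	The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is how TLS clients locate CA certificates in /etc/ssl/certs; the mapping can be reproduced by hand:
	    # the printed hash should match the symlink created for minikubeCA.pem (b5213941 in this run)
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    ls -l /etc/ssl/certs/b5213941.0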
	I1212 00:09:57.148701  111958 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:09:57.153138  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:09:57.158892  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:09:57.164626  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:09:57.170141  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:09:57.175969  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:09:57.181552  111958 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
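	Each "-checkend 86400" above exits 0 only if the certificate is still valid 86,400 seconds (24 hours) from now, so a non-zero exit flags imminent expiry; the same sweep can be run by hand:
	    # flag any control-plane certificate that expires within the next 24h
	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	             etcd/server etcd/healthcheck-client etcd/peer; do
	      sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
	        || echo "$c.crt expires within 24h"
	    done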
	I1212 00:09:57.187134  111958 kubeadm.go:392] StartCluster: {Name:ha-565823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-565823 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.19 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.95 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.247 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:09:57.187280  111958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:09:57.187337  111958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:09:57.225417  111958 cri.go:89] found id: "5a35809e3509198342321446b137c2ec81b705d2d75f3d45649231a9834f9c8f"
	I1212 00:09:57.225444  111958 cri.go:89] found id: "50dabc2311179ad90a354053055125ab2c7053eeec2d9ffa191f4c933f3284c6"
	I1212 00:09:57.225450  111958 cri.go:89] found id: "8049dfebf9c9fe178ac072006401ab999e3752f4dad344eec6ced3f1c75bd004"
	I1212 00:09:57.225454  111958 cri.go:89] found id: "b4a684e30d3ad22203c06045f00cef118a261bfa08f332883d58e350c0395cc3"
	I1212 00:09:57.225457  111958 cri.go:89] found id: "999ac642455914fc7106ee846e6963858950265f3b20e267beea0d33bb96b481"
	I1212 00:09:57.225460  111958 cri.go:89] found id: "0beb663c1a28f15433da64aaa322ece1179a634d496302faaa231fb50ef4a9c3"
	I1212 00:09:57.225463  111958 cri.go:89] found id: "bfdacc6be0aeebf9f7adae9deffb700cafef2a682cab0a77b280192565aeb098"
	I1212 00:09:57.225468  111958 cri.go:89] found id: "514637eeaa81289a53c2a40b10f6b526bde1c9085500007fb555890ddbc40c57"
	I1212 00:09:57.225473  111958 cri.go:89] found id: "768be9c2541014544ec0c7e8069fdf0c3ce145898eea400073ae8b79e0761778"
	I1212 00:09:57.225482  111958 cri.go:89] found id: "452c6d19b2de997b5758578262adb5d70dca08c4bc4748b16ae6a6b0ab97e3b1"
	I1212 00:09:57.225487  111958 cri.go:89] found id: "743ae8ccc81f5a0b5a3d85146c6d453dcf412d31d4aec91a537a27c981dd589b"
	I1212 00:09:57.225490  111958 cri.go:89] found id: "4f25ff314c2e8b8fc72d18ae5a324abf68fb975dd2e04982381620635bec1e95"
	I1212 00:09:57.225492  111958 cri.go:89] found id: "b28e7b492cfe776574eca3c37d3062a30b479b5971866a25e28290de8118bef4"
	I1212 00:09:57.225495  111958 cri.go:89] found id: ""
	I1212 00:09:57.225538  111958 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
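The tail of the post-mortem log above walks through minikube's certificate checks: each CA PEM is symlinked under an OpenSSL subject-hash name (for example /etc/ssl/certs/b5213941.0), and every cluster certificate is tested with `openssl x509 -checkend 86400` to confirm it stays valid for at least another 24 hours. A minimal, hypothetical Go sketch of those two steps, built only from the openssl and ln invocations that appear in the log (the helper names are invented, and the symlink step needs root):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // linkByHash mirrors the "openssl x509 -hash -noout" + "ln -fs" pair in the
    // log: OpenSSL-style trust stores look CAs up by <subject-hash>.0 symlinks
    // under /etc/ssl/certs.
    func linkByHash(pem string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return "", err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	return link, exec.Command("ln", "-fs", pem, link).Run()
    }

    // validFor24h mirrors "openssl x509 -checkend 86400": a zero exit status
    // means the certificate will still be valid 86400 seconds (24h) from now.
    func validFor24h(crt string) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run() == nil
    }

    func main() {
    	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem")
    	fmt.Println("link:", link, "err:", err)
    	fmt.Println("apiserver-kubelet-client still valid for 24h:",
    		validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }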
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565823 -n ha-565823
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.32s)
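Just before the failure above, the log enumerates kube-system containers with crictl's label filter (the "found id: ..." lines). A small sketch of that enumeration, assuming the same crictl flags shown in the log; the helper name is hypothetical and this is not minikube's cri.go implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemContainers runs the crictl query from the log and splits the
    // quiet output (one container ID per line) into a slice.
    func listKubeSystemContainers() ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listKubeSystemContainers()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	for _, id := range ids {
    		fmt.Println("found id:", id)
    	}
    }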

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (325.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-492537
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-492537
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-492537: exit status 82 (2m1.911670667s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-492537-m03"  ...
	* Stopping node "multinode-492537-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-492537" : exit status 82
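Exit status 82 with GUEST_STOP_TIMEOUT means the driver kept reporting the VM as "Running" until the stop deadline passed. A generic Go sketch of that wait-for-state pattern, purely illustrative (getState is a stand-in, not the kvm2 driver API):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // getState stands in for the driver call that kept answering "Running" in
    // the log; a real driver would query libvirt here.
    func getState() string { return "Running" }

    // waitForStop polls until the machine reports a stopped state or the
    // deadline passes, which is the shape of failure behind GUEST_STOP_TIMEOUT.
    func waitForStop(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if getState() == "Stopped" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
    	if err := waitForStop(6 * time.Second); err != nil {
    		fmt.Println("stop timed out:", err)
    	}
    }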
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-492537 --wait=true -v=8 --alsologtostderr
E1212 00:32:46.618192   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:32:55.700664   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-492537 --wait=true -v=8 --alsologtostderr: (3m20.532191325s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-492537
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-492537 -n multinode-492537
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-492537 logs -n 25: (2.094999312s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m02:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3175371616/001/cp-test_multinode-492537-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m02:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537:/home/docker/cp-test_multinode-492537-m02_multinode-492537.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537 sudo cat                                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m02_multinode-492537.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m02:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03:/home/docker/cp-test_multinode-492537-m02_multinode-492537-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537-m03 sudo cat                                   | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m02_multinode-492537-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp testdata/cp-test.txt                                                | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3175371616/001/cp-test_multinode-492537-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537:/home/docker/cp-test_multinode-492537-m03_multinode-492537.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537 sudo cat                                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m03_multinode-492537.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02:/home/docker/cp-test_multinode-492537-m03_multinode-492537-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537-m02 sudo cat                                   | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m03_multinode-492537-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-492537 node stop m03                                                          | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	| node    | multinode-492537 node start                                                             | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-492537                                                                | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC |                     |
	| stop    | -p multinode-492537                                                                     | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC |                     |
	| start   | -p multinode-492537                                                                     | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:31 UTC | 12 Dec 24 00:35 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-492537                                                                | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:35 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:31:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
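The header above documents the klog/glog line format used for the rest of this log. A small sketch of a parser for that format, assuming exactly the layout described; the regular expression is illustrative and not part of minikube:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:\]]+):(\d+)\] (.*)$`)

    func main() {
    	line := "I1212 00:31:51.044676  124345 out.go:345] Setting OutFile to fd 1 ..."
    	if m := klogLine.FindStringSubmatch(line); m != nil {
    		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
    			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    	}
    }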
	I1212 00:31:51.044676  124345 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:31:51.044782  124345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:51.044790  124345 out.go:358] Setting ErrFile to fd 2...
	I1212 00:31:51.044795  124345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:51.044957  124345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:31:51.045540  124345 out.go:352] Setting JSON to false
	I1212 00:31:51.046482  124345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11653,"bootTime":1733951858,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:31:51.046572  124345 start.go:139] virtualization: kvm guest
	I1212 00:31:51.049152  124345 out.go:177] * [multinode-492537] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:31:51.050704  124345 notify.go:220] Checking for updates...
	I1212 00:31:51.050725  124345 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:31:51.052114  124345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:31:51.053646  124345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:31:51.054994  124345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:31:51.056274  124345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:31:51.057458  124345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:31:51.058982  124345 config.go:182] Loaded profile config "multinode-492537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:31:51.059076  124345 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:31:51.059515  124345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:31:51.059556  124345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:31:51.074714  124345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1212 00:31:51.075175  124345 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:31:51.075735  124345 main.go:141] libmachine: Using API Version  1
	I1212 00:31:51.075754  124345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:31:51.076128  124345 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:31:51.076330  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:31:51.110816  124345 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:31:51.112014  124345 start.go:297] selected driver: kvm2
	I1212 00:31:51.112030  124345 start.go:901] validating driver "kvm2" against &{Name:multinode-492537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:51.112202  124345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:31:51.112619  124345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:31:51.112701  124345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:31:51.127127  124345 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:31:51.128148  124345 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:31:51.128194  124345 cni.go:84] Creating CNI manager for ""
	I1212 00:31:51.128262  124345 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1212 00:31:51.128346  124345 start.go:340] cluster config:
	{Name:multinode-492537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:51.128544  124345 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:31:51.131189  124345 out.go:177] * Starting "multinode-492537" primary control-plane node in "multinode-492537" cluster
	I1212 00:31:51.132676  124345 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:31:51.132716  124345 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:31:51.132730  124345 cache.go:56] Caching tarball of preloaded images
	I1212 00:31:51.132794  124345 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:31:51.132806  124345 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
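The preload step above reuses a cached image tarball instead of downloading it again. A simplified sketch of that existence check, with the path copied from the log; the surrounding logic is hypothetical:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    )

    func main() {
    	preload := "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"
    	if _, err := os.Stat(preload); err == nil {
    		fmt.Println("preload found in cache, skipping download")
    	} else if errors.Is(err, os.ErrNotExist) {
    		fmt.Println("preload missing, a download step would run here")
    	} else {
    		fmt.Println("stat failed:", err)
    	}
    }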
	I1212 00:31:51.132919  124345 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/config.json ...
	I1212 00:31:51.133098  124345 start.go:360] acquireMachinesLock for multinode-492537: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:31:51.133136  124345 start.go:364] duration metric: took 21.679µs to acquireMachinesLock for "multinode-492537"
	I1212 00:31:51.133151  124345 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:31:51.133160  124345 fix.go:54] fixHost starting: 
	I1212 00:31:51.133403  124345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:31:51.133434  124345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:31:51.147659  124345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I1212 00:31:51.148097  124345 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:31:51.148611  124345 main.go:141] libmachine: Using API Version  1
	I1212 00:31:51.148635  124345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:31:51.148971  124345 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:31:51.149147  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:31:51.149279  124345 main.go:141] libmachine: (multinode-492537) Calling .GetState
	I1212 00:31:51.150757  124345 fix.go:112] recreateIfNeeded on multinode-492537: state=Running err=<nil>
	W1212 00:31:51.150779  124345 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:31:51.153207  124345 out.go:177] * Updating the running kvm2 "multinode-492537" VM ...
	I1212 00:31:51.154454  124345 machine.go:93] provisionDockerMachine start ...
	I1212 00:31:51.154475  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:31:51.154684  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.157075  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.157470  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.157502  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.157621  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.157761  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.157870  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.157978  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.158113  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:31:51.158378  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:31:51.158396  124345 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 00:31:51.272751  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-492537
	
	I1212 00:31:51.272782  124345 main.go:141] libmachine: (multinode-492537) Calling .GetMachineName
	I1212 00:31:51.273034  124345 buildroot.go:166] provisioning hostname "multinode-492537"
	I1212 00:31:51.273068  124345 main.go:141] libmachine: (multinode-492537) Calling .GetMachineName
	I1212 00:31:51.273260  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.275899  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.276230  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.276266  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.276342  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.276513  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.276664  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.276785  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.276947  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:31:51.277157  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:31:51.277175  124345 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-492537 && echo "multinode-492537" | sudo tee /etc/hostname
	I1212 00:31:51.403928  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-492537
	
	I1212 00:31:51.403964  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.406649  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.407027  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.407056  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.407230  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.407383  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.407548  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.407687  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.407846  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:31:51.408013  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:31:51.408029  124345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-492537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-492537/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-492537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:31:51.516444  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
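Each provisioning command above, including the /etc/hosts fix-up, is executed over a native SSH client. A hypothetical sketch of one such round trip with golang.org/x/crypto/ssh, using the address, user and key path printed in the log; this is not libmachine's actual client and requires the x/crypto module:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.208:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("hostname")
    	fmt.Printf("err=%v output=%s", err, out)
    }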
	I1212 00:31:51.516479  124345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:31:51.516520  124345 buildroot.go:174] setting up certificates
	I1212 00:31:51.516530  124345 provision.go:84] configureAuth start
	I1212 00:31:51.516541  124345 main.go:141] libmachine: (multinode-492537) Calling .GetMachineName
	I1212 00:31:51.516816  124345 main.go:141] libmachine: (multinode-492537) Calling .GetIP
	I1212 00:31:51.519494  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.519845  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.519866  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.520017  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.521931  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.522235  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.522270  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.522369  124345 provision.go:143] copyHostCerts
	I1212 00:31:51.522422  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:31:51.522461  124345 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:31:51.522483  124345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:31:51.522560  124345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:31:51.522691  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:31:51.522720  124345 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:31:51.522728  124345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:31:51.522768  124345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:31:51.522847  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:31:51.522870  124345 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:31:51.522877  124345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:31:51.522914  124345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:31:51.522996  124345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.multinode-492537 san=[127.0.0.1 192.168.39.208 localhost minikube multinode-492537]
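configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine name. A self-contained sketch of issuing a certificate with those SANs via crypto/x509, self-signed here for brevity whereas minikube signs it with its CA; the names and IPs are copied from the log line above:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	// Template carrying the SANs seen in the log.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-492537"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.208")},
    		DNSNames:     []string{"localhost", "minikube", "multinode-492537"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }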
	I1212 00:31:51.666722  124345 provision.go:177] copyRemoteCerts
	I1212 00:31:51.666819  124345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:31:51.666853  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.669491  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.669802  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.669829  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.669988  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.670161  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.670310  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.670446  124345 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:31:51.754283  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:31:51.754367  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:31:51.779717  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:31:51.779790  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 00:31:51.805182  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:31:51.805266  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:31:51.830435  124345 provision.go:87] duration metric: took 313.887878ms to configureAuth
	I1212 00:31:51.830465  124345 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:31:51.830736  124345 config.go:182] Loaded profile config "multinode-492537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:31:51.830826  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.833574  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.833978  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.834006  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.834159  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.834326  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.834467  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.834580  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.834752  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:31:51.834970  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:31:51.834990  124345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:33:22.702659  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:33:22.702697  124345 machine.go:96] duration metric: took 1m31.548227464s to provisionDockerMachine
	I1212 00:33:22.702712  124345 start.go:293] postStartSetup for "multinode-492537" (driver="kvm2")
	I1212 00:33:22.702723  124345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:33:22.702743  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.703156  124345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:33:22.703202  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:33:22.706446  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.706870  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.706899  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.707072  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:33:22.707255  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.707409  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:33:22.707581  124345 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:33:22.795827  124345 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:33:22.800388  124345 command_runner.go:130] > NAME=Buildroot
	I1212 00:33:22.800409  124345 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1212 00:33:22.800414  124345 command_runner.go:130] > ID=buildroot
	I1212 00:33:22.800418  124345 command_runner.go:130] > VERSION_ID=2023.02.9
	I1212 00:33:22.800423  124345 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1212 00:33:22.800453  124345 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:33:22.800469  124345 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:33:22.800532  124345 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:33:22.800607  124345 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:33:22.800617  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:33:22.800695  124345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:33:22.811091  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:33:22.835606  124345 start.go:296] duration metric: took 132.865141ms for postStartSetup
	I1212 00:33:22.835660  124345 fix.go:56] duration metric: took 1m31.702501396s for fixHost
	I1212 00:33:22.835684  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:33:22.838720  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.839129  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.839177  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.839305  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:33:22.839519  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.839696  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.839850  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:33:22.840063  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:33:22.840238  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:33:22.840248  124345 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:33:22.948415  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733963602.916067935
	
	I1212 00:33:22.948444  124345 fix.go:216] guest clock: 1733963602.916067935
	I1212 00:33:22.948452  124345 fix.go:229] Guest: 2024-12-12 00:33:22.916067935 +0000 UTC Remote: 2024-12-12 00:33:22.835666506 +0000 UTC m=+91.830075377 (delta=80.401429ms)
	I1212 00:33:22.948471  124345 fix.go:200] guest clock delta is within tolerance: 80.401429ms
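The guest clock check above compares the VM's `date +%s.%N` output against the host-side timestamp and accepts the host if the delta stays within a tolerance. A small sketch of that comparison; the one-second tolerance is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts "date +%s.%N" output (e.g. "1733963602.916067935")
    // into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1733963602.916067935")
    	if err != nil {
    		panic(err)
    	}
    	delta := guest.Sub(time.Now())
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 1 * time.Second // illustrative threshold only
    	if delta <= tolerance {
    		fmt.Println("guest clock delta within tolerance:", delta)
    	} else {
    		fmt.Println("guest clock skew too large:", delta)
    	}
    }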
	I1212 00:33:22.948477  124345 start.go:83] releasing machines lock for "multinode-492537", held for 1m31.815331691s
	I1212 00:33:22.948495  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.948773  124345 main.go:141] libmachine: (multinode-492537) Calling .GetIP
	I1212 00:33:22.951327  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.951762  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.951785  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.951941  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.952462  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.952631  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.952731  124345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:33:22.952782  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:33:22.952891  124345 ssh_runner.go:195] Run: cat /version.json
	I1212 00:33:22.952919  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:33:22.955472  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.955623  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.955856  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.955883  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.955932  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.955950  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.956033  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:33:22.956213  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:33:22.956224  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.956446  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.956459  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:33:22.956622  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:33:22.956651  124345 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:33:22.956767  124345 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:33:23.036772  124345 command_runner.go:130] > {"iso_version": "v1.34.0-1733936888-20083", "kicbase_version": "v0.0.45-1733912881-20083", "minikube_version": "v1.34.0", "commit": "c120d5e16c3cccce289808bdfc18c123105e3e3b"}
	I1212 00:33:23.037069  124345 ssh_runner.go:195] Run: systemctl --version
	I1212 00:33:23.064573  124345 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:33:23.064629  124345 command_runner.go:130] > systemd 252 (252)
	I1212 00:33:23.064669  124345 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1212 00:33:23.064732  124345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:33:23.231649  124345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:33:23.237725  124345 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:33:23.237855  124345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:33:23.237919  124345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:33:23.247775  124345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:33:23.247794  124345 start.go:495] detecting cgroup driver to use...
	I1212 00:33:23.247867  124345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:33:23.267920  124345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:33:23.282371  124345 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:33:23.282433  124345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:33:23.298269  124345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:33:23.313631  124345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:33:23.468777  124345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:33:23.608040  124345 docker.go:233] disabling docker service ...
	I1212 00:33:23.608130  124345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:33:23.626148  124345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:33:23.640288  124345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:33:23.778080  124345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:33:23.916990  124345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:33:23.931199  124345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:33:23.950997  124345 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:33:23.951041  124345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:33:23.951087  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.961760  124345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:33:23.961826  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.972328  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.982729  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.993068  124345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:33:24.009012  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:24.019688  124345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:24.031220  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
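
The sequence of `sed -i` edits above rewrites /etc/crio/crio.conf.d/02-crio.conf line by line rather than parsing it as TOML: it pins pause_image to registry.k8s.io/pause:3.10, forces cgroup_manager to "cgroupfs", resets conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A rough Go equivalent of the first two substitutions; the sample config text below is invented for illustration and is not the file from the guest:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Invented sample of a CRI-O drop-in; the real file lives at
	// /etc/crio/crio.conf.d/02-crio.conf on the guest.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Mirrors: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
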
	I1212 00:33:24.043581  124345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:33:24.055385  124345 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:33:24.055509  124345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:33:24.065735  124345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:24.213954  124345 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:33:24.419201  124345 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:33:24.419290  124345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:33:24.424492  124345 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:33:24.424518  124345 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:33:24.424527  124345 command_runner.go:130] > Device: 0,22	Inode: 1290        Links: 1
	I1212 00:33:24.424586  124345 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:33:24.424618  124345 command_runner.go:130] > Access: 2024-12-12 00:33:24.278233514 +0000
	I1212 00:33:24.424627  124345 command_runner.go:130] > Modify: 2024-12-12 00:33:24.278233514 +0000
	I1212 00:33:24.424632  124345 command_runner.go:130] > Change: 2024-12-12 00:33:24.278233514 +0000
	I1212 00:33:24.424636  124345 command_runner.go:130] >  Birth: -
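
After `systemctl restart crio`, start.go:542 waits up to 60s for /var/run/crio/crio.sock to appear, and the `stat` output above confirms the socket exists. A small polling sketch of that kind of wait; the path and 60s timeout come from the log, while the helper itself and the 500ms interval are illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
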
	I1212 00:33:24.424661  124345 start.go:563] Will wait 60s for crictl version
	I1212 00:33:24.424710  124345 ssh_runner.go:195] Run: which crictl
	I1212 00:33:24.428658  124345 command_runner.go:130] > /usr/bin/crictl
	I1212 00:33:24.428716  124345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:33:24.468255  124345 command_runner.go:130] > Version:  0.1.0
	I1212 00:33:24.468286  124345 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:33:24.468292  124345 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1212 00:33:24.468479  124345 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:33:24.469749  124345 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
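
The `crictl version` output is plain `Key:  value` text, which start.go:579 echoes back almost verbatim. A quick way to pull the runtime name and version out of that output, shown as a sketch using the exact lines from the log; minikube's own parser may differ:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Output copied from the log above.
	out := `Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.29.1
RuntimeApiVersion:  v1`

	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		k, v, ok := strings.Cut(sc.Text(), ":")
		if !ok {
			continue
		}
		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	fmt.Printf("runtime %s %s (CRI API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}
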
	I1212 00:33:24.469822  124345 ssh_runner.go:195] Run: crio --version
	I1212 00:33:24.499118  124345 command_runner.go:130] > crio version 1.29.1
	I1212 00:33:24.499143  124345 command_runner.go:130] > Version:        1.29.1
	I1212 00:33:24.499148  124345 command_runner.go:130] > GitCommit:      unknown
	I1212 00:33:24.499152  124345 command_runner.go:130] > GitCommitDate:  unknown
	I1212 00:33:24.499156  124345 command_runner.go:130] > GitTreeState:   clean
	I1212 00:33:24.499161  124345 command_runner.go:130] > BuildDate:      2024-12-11T22:36:45Z
	I1212 00:33:24.499165  124345 command_runner.go:130] > GoVersion:      go1.21.6
	I1212 00:33:24.499169  124345 command_runner.go:130] > Compiler:       gc
	I1212 00:33:24.499173  124345 command_runner.go:130] > Platform:       linux/amd64
	I1212 00:33:24.499177  124345 command_runner.go:130] > Linkmode:       dynamic
	I1212 00:33:24.499181  124345 command_runner.go:130] > BuildTags:      
	I1212 00:33:24.499187  124345 command_runner.go:130] >   containers_image_ostree_stub
	I1212 00:33:24.499194  124345 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1212 00:33:24.499202  124345 command_runner.go:130] >   btrfs_noversion
	I1212 00:33:24.499210  124345 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1212 00:33:24.499217  124345 command_runner.go:130] >   libdm_no_deferred_remove
	I1212 00:33:24.499227  124345 command_runner.go:130] >   seccomp
	I1212 00:33:24.499234  124345 command_runner.go:130] > LDFlags:          unknown
	I1212 00:33:24.499242  124345 command_runner.go:130] > SeccompEnabled:   true
	I1212 00:33:24.499249  124345 command_runner.go:130] > AppArmorEnabled:  false
	I1212 00:33:24.499371  124345 ssh_runner.go:195] Run: crio --version
	I1212 00:33:24.528540  124345 command_runner.go:130] > crio version 1.29.1
	I1212 00:33:24.528569  124345 command_runner.go:130] > Version:        1.29.1
	I1212 00:33:24.528576  124345 command_runner.go:130] > GitCommit:      unknown
	I1212 00:33:24.528580  124345 command_runner.go:130] > GitCommitDate:  unknown
	I1212 00:33:24.528583  124345 command_runner.go:130] > GitTreeState:   clean
	I1212 00:33:24.528591  124345 command_runner.go:130] > BuildDate:      2024-12-11T22:36:45Z
	I1212 00:33:24.528597  124345 command_runner.go:130] > GoVersion:      go1.21.6
	I1212 00:33:24.528605  124345 command_runner.go:130] > Compiler:       gc
	I1212 00:33:24.528613  124345 command_runner.go:130] > Platform:       linux/amd64
	I1212 00:33:24.528624  124345 command_runner.go:130] > Linkmode:       dynamic
	I1212 00:33:24.528634  124345 command_runner.go:130] > BuildTags:      
	I1212 00:33:24.528640  124345 command_runner.go:130] >   containers_image_ostree_stub
	I1212 00:33:24.528650  124345 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1212 00:33:24.528657  124345 command_runner.go:130] >   btrfs_noversion
	I1212 00:33:24.528668  124345 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1212 00:33:24.528673  124345 command_runner.go:130] >   libdm_no_deferred_remove
	I1212 00:33:24.528678  124345 command_runner.go:130] >   seccomp
	I1212 00:33:24.528685  124345 command_runner.go:130] > LDFlags:          unknown
	I1212 00:33:24.528696  124345 command_runner.go:130] > SeccompEnabled:   true
	I1212 00:33:24.528703  124345 command_runner.go:130] > AppArmorEnabled:  false
	I1212 00:33:24.530706  124345 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:33:24.532267  124345 main.go:141] libmachine: (multinode-492537) Calling .GetIP
	I1212 00:33:24.535018  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:24.535407  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:24.535430  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:24.535662  124345 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:33:24.539920  124345 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1212 00:33:24.540024  124345 kubeadm.go:883] updating cluster {Name:multinode-492537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1212 00:33:24.540170  124345 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:33:24.540212  124345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.581953  124345 command_runner.go:130] > {
	I1212 00:33:24.581982  124345 command_runner.go:130] >   "images": [
	I1212 00:33:24.581987  124345 command_runner.go:130] >     {
	I1212 00:33:24.581999  124345 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1212 00:33:24.582003  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582011  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1212 00:33:24.582014  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582018  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582028  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1212 00:33:24.582039  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1212 00:33:24.582044  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582052  124345 command_runner.go:130] >       "size": "94965812",
	I1212 00:33:24.582061  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582069  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582077  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582082  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582086  124345 command_runner.go:130] >     },
	I1212 00:33:24.582089  124345 command_runner.go:130] >     {
	I1212 00:33:24.582095  124345 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1212 00:33:24.582099  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582114  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1212 00:33:24.582117  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582123  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582220  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1212 00:33:24.582236  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1212 00:33:24.582245  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582250  124345 command_runner.go:130] >       "size": "94963761",
	I1212 00:33:24.582256  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582274  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582285  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582292  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582298  124345 command_runner.go:130] >     },
	I1212 00:33:24.582309  124345 command_runner.go:130] >     {
	I1212 00:33:24.582321  124345 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1212 00:33:24.582331  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582340  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1212 00:33:24.582345  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582359  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582375  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1212 00:33:24.582390  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1212 00:33:24.582399  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582407  124345 command_runner.go:130] >       "size": "1363676",
	I1212 00:33:24.582416  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582426  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582433  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582439  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582449  124345 command_runner.go:130] >     },
	I1212 00:33:24.582458  124345 command_runner.go:130] >     {
	I1212 00:33:24.582471  124345 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 00:33:24.582480  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582490  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:33:24.582499  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582509  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582521  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 00:33:24.582542  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 00:33:24.582552  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582561  124345 command_runner.go:130] >       "size": "31470524",
	I1212 00:33:24.582571  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582580  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582590  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582599  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582606  124345 command_runner.go:130] >     },
	I1212 00:33:24.582609  124345 command_runner.go:130] >     {
	I1212 00:33:24.582620  124345 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1212 00:33:24.582630  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582644  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1212 00:33:24.582653  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582660  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582674  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1212 00:33:24.582689  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1212 00:33:24.582696  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582703  124345 command_runner.go:130] >       "size": "63273227",
	I1212 00:33:24.582712  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582720  124345 command_runner.go:130] >       "username": "nonroot",
	I1212 00:33:24.582730  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582737  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582746  124345 command_runner.go:130] >     },
	I1212 00:33:24.582751  124345 command_runner.go:130] >     {
	I1212 00:33:24.582764  124345 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1212 00:33:24.582774  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582782  124345 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1212 00:33:24.582788  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582798  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582814  124345 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1212 00:33:24.582828  124345 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1212 00:33:24.582837  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582847  124345 command_runner.go:130] >       "size": "149009664",
	I1212 00:33:24.582856  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.582864  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.582870  124345 command_runner.go:130] >       },
	I1212 00:33:24.582876  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582886  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582897  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582905  124345 command_runner.go:130] >     },
	I1212 00:33:24.582913  124345 command_runner.go:130] >     {
	I1212 00:33:24.582926  124345 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1212 00:33:24.582935  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582944  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1212 00:33:24.582954  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582959  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582974  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1212 00:33:24.582991  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1212 00:33:24.582999  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583006  124345 command_runner.go:130] >       "size": "95274464",
	I1212 00:33:24.583015  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.583022  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.583031  124345 command_runner.go:130] >       },
	I1212 00:33:24.583038  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583044  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583050  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.583059  124345 command_runner.go:130] >     },
	I1212 00:33:24.583068  124345 command_runner.go:130] >     {
	I1212 00:33:24.583092  124345 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1212 00:33:24.583102  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.583110  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1212 00:33:24.583116  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583121  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.583141  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1212 00:33:24.583154  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1212 00:33:24.583160  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583167  124345 command_runner.go:130] >       "size": "89474374",
	I1212 00:33:24.583173  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.583180  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.583185  124345 command_runner.go:130] >       },
	I1212 00:33:24.583196  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583202  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583209  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.583214  124345 command_runner.go:130] >     },
	I1212 00:33:24.583218  124345 command_runner.go:130] >     {
	I1212 00:33:24.583225  124345 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1212 00:33:24.583232  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.583241  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1212 00:33:24.583250  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583257  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.583273  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1212 00:33:24.583288  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1212 00:33:24.583297  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583303  124345 command_runner.go:130] >       "size": "92783513",
	I1212 00:33:24.583310  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.583317  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583327  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583334  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.583343  124345 command_runner.go:130] >     },
	I1212 00:33:24.583353  124345 command_runner.go:130] >     {
	I1212 00:33:24.583366  124345 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1212 00:33:24.583371  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.583378  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1212 00:33:24.583384  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583391  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.583407  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1212 00:33:24.583421  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1212 00:33:24.583430  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583438  124345 command_runner.go:130] >       "size": "68457798",
	I1212 00:33:24.583447  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.583455  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.583461  124345 command_runner.go:130] >       },
	I1212 00:33:24.583469  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583475  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583482  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.583489  124345 command_runner.go:130] >     },
	I1212 00:33:24.583495  124345 command_runner.go:130] >     {
	I1212 00:33:24.583508  124345 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1212 00:33:24.583515  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.583524  124345 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1212 00:33:24.583532  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583543  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.583555  124345 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1212 00:33:24.583570  124345 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1212 00:33:24.583579  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583586  124345 command_runner.go:130] >       "size": "742080",
	I1212 00:33:24.583618  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.583631  124345 command_runner.go:130] >         "value": "65535"
	I1212 00:33:24.583640  124345 command_runner.go:130] >       },
	I1212 00:33:24.583648  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583657  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583664  124345 command_runner.go:130] >       "pinned": true
	I1212 00:33:24.583672  124345 command_runner.go:130] >     }
	I1212 00:33:24.583679  124345 command_runner.go:130] >   ]
	I1212 00:33:24.583687  124345 command_runner.go:130] > }
	I1212 00:33:24.583983  124345 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.584006  124345 crio.go:433] Images already preloaded, skipping extraction
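
crio.go:514 decides that all images are preloaded by listing them with `sudo crictl images --output json` and checking their repoTags against the image set required for Kubernetes v1.31.2. A sketch of that kind of check against the JSON shape shown above; the struct, the truncated sample, and the required-image list are assumptions drawn from the log, not minikube's code:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList matches the shape of `crictl images --output json` shown above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Truncated sample of the crictl output from the log.
	raw := []byte(`{"images":[
	  {"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"]},
	  {"repoTags":["registry.k8s.io/etcd:3.5.15-0"]},
	  {"repoTags":["registry.k8s.io/pause:3.10"]}
	]}`)

	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Abbreviated, assumed subset of the images minikube expects.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.2",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}
	for _, want := range required {
		fmt.Printf("%-45s preloaded=%v\n", want, have[want])
	}
}
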
	I1212 00:33:24.584081  124345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.617660  124345 command_runner.go:130] > {
	I1212 00:33:24.617682  124345 command_runner.go:130] >   "images": [
	I1212 00:33:24.617687  124345 command_runner.go:130] >     {
	I1212 00:33:24.617699  124345 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1212 00:33:24.617707  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.617716  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1212 00:33:24.617722  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617735  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.617746  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1212 00:33:24.617757  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1212 00:33:24.617761  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617765  124345 command_runner.go:130] >       "size": "94965812",
	I1212 00:33:24.617769  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.617773  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.617783  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.617792  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.617799  124345 command_runner.go:130] >     },
	I1212 00:33:24.617805  124345 command_runner.go:130] >     {
	I1212 00:33:24.617814  124345 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1212 00:33:24.617821  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.617829  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1212 00:33:24.617837  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617843  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.617855  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1212 00:33:24.617867  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1212 00:33:24.617873  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617883  124345 command_runner.go:130] >       "size": "94963761",
	I1212 00:33:24.617890  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.617905  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.617914  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.617923  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.617931  124345 command_runner.go:130] >     },
	I1212 00:33:24.617938  124345 command_runner.go:130] >     {
	I1212 00:33:24.617947  124345 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1212 00:33:24.617951  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.617961  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1212 00:33:24.617971  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617981  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.617996  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1212 00:33:24.618011  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1212 00:33:24.618025  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618035  124345 command_runner.go:130] >       "size": "1363676",
	I1212 00:33:24.618042  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.618048  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618061  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618073  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618080  124345 command_runner.go:130] >     },
	I1212 00:33:24.618086  124345 command_runner.go:130] >     {
	I1212 00:33:24.618098  124345 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 00:33:24.618109  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618116  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:33:24.618125  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618132  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618145  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 00:33:24.618170  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 00:33:24.618180  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618188  124345 command_runner.go:130] >       "size": "31470524",
	I1212 00:33:24.618198  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.618207  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618215  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618221  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618225  124345 command_runner.go:130] >     },
	I1212 00:33:24.618234  124345 command_runner.go:130] >     {
	I1212 00:33:24.618246  124345 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1212 00:33:24.618256  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618266  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1212 00:33:24.618275  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618283  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618298  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1212 00:33:24.618310  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1212 00:33:24.618319  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618327  124345 command_runner.go:130] >       "size": "63273227",
	I1212 00:33:24.618344  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.618363  124345 command_runner.go:130] >       "username": "nonroot",
	I1212 00:33:24.618373  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618382  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618390  124345 command_runner.go:130] >     },
	I1212 00:33:24.618399  124345 command_runner.go:130] >     {
	I1212 00:33:24.618412  124345 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1212 00:33:24.618423  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618434  124345 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1212 00:33:24.618443  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618460  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618477  124345 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1212 00:33:24.618484  124345 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1212 00:33:24.618490  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618496  124345 command_runner.go:130] >       "size": "149009664",
	I1212 00:33:24.618502  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.618509  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.618522  124345 command_runner.go:130] >       },
	I1212 00:33:24.618528  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618535  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618544  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618552  124345 command_runner.go:130] >     },
	I1212 00:33:24.618557  124345 command_runner.go:130] >     {
	I1212 00:33:24.618568  124345 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1212 00:33:24.618576  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618608  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1212 00:33:24.618621  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618628  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618640  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1212 00:33:24.618655  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1212 00:33:24.618662  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618669  124345 command_runner.go:130] >       "size": "95274464",
	I1212 00:33:24.618679  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.618689  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.618705  124345 command_runner.go:130] >       },
	I1212 00:33:24.618715  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618724  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618734  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618743  124345 command_runner.go:130] >     },
	I1212 00:33:24.618749  124345 command_runner.go:130] >     {
	I1212 00:33:24.618760  124345 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1212 00:33:24.618770  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618783  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1212 00:33:24.618792  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618802  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618834  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1212 00:33:24.618849  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1212 00:33:24.618856  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618866  124345 command_runner.go:130] >       "size": "89474374",
	I1212 00:33:24.618876  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.618885  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.618894  124345 command_runner.go:130] >       },
	I1212 00:33:24.618904  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618913  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618922  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618926  124345 command_runner.go:130] >     },
	I1212 00:33:24.618930  124345 command_runner.go:130] >     {
	I1212 00:33:24.618943  124345 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1212 00:33:24.618950  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618962  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1212 00:33:24.618970  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618977  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618992  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1212 00:33:24.619008  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1212 00:33:24.619014  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619020  124345 command_runner.go:130] >       "size": "92783513",
	I1212 00:33:24.619029  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.619046  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.619055  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.619061  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.619069  124345 command_runner.go:130] >     },
	I1212 00:33:24.619078  124345 command_runner.go:130] >     {
	I1212 00:33:24.619088  124345 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1212 00:33:24.619097  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.619106  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1212 00:33:24.619115  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619125  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.619137  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1212 00:33:24.619152  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1212 00:33:24.619162  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619171  124345 command_runner.go:130] >       "size": "68457798",
	I1212 00:33:24.619180  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.619186  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.619194  124345 command_runner.go:130] >       },
	I1212 00:33:24.619204  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.619214  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.619222  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.619231  124345 command_runner.go:130] >     },
	I1212 00:33:24.619239  124345 command_runner.go:130] >     {
	I1212 00:33:24.619252  124345 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1212 00:33:24.619261  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.619272  124345 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1212 00:33:24.619278  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619285  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.619300  124345 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1212 00:33:24.619314  124345 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1212 00:33:24.619323  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619338  124345 command_runner.go:130] >       "size": "742080",
	I1212 00:33:24.619348  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.619357  124345 command_runner.go:130] >         "value": "65535"
	I1212 00:33:24.619371  124345 command_runner.go:130] >       },
	I1212 00:33:24.619381  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.619388  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.619398  124345 command_runner.go:130] >       "pinned": true
	I1212 00:33:24.619406  124345 command_runner.go:130] >     }
	I1212 00:33:24.619415  124345 command_runner.go:130] >   ]
	I1212 00:33:24.619423  124345 command_runner.go:130] > }
	I1212 00:33:24.619620  124345 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.619637  124345 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:33:24.619647  124345 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.2 crio true true} ...
	I1212 00:33:24.619768  124345 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-492537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
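
kubeadm.go:946 renders the kubelet systemd drop-in shown above from the node's Kubernetes version, hostname override, and IP. A simplified text/template rendering of the same drop-in; the template text is reconstructed from the log output above, not copied from minikube's source:

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log above.
	err := tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.2", "multinode-492537", "192.168.39.208"})
	if err != nil {
		panic(err)
	}
}
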
	I1212 00:33:24.619854  124345 ssh_runner.go:195] Run: crio config
	I1212 00:33:24.654076  124345 command_runner.go:130] ! time="2024-12-12 00:33:24.621740261Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1212 00:33:24.659298  124345 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 00:33:24.673071  124345 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:33:24.673099  124345 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:33:24.673106  124345 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:33:24.673109  124345 command_runner.go:130] > #
	I1212 00:33:24.673117  124345 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:33:24.673122  124345 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:33:24.673128  124345 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:33:24.673141  124345 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:33:24.673145  124345 command_runner.go:130] > # reload'.
	I1212 00:33:24.673151  124345 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:33:24.673157  124345 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:33:24.673164  124345 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:33:24.673170  124345 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:33:24.673174  124345 command_runner.go:130] > [crio]
	I1212 00:33:24.673180  124345 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:33:24.673185  124345 command_runner.go:130] > # containers images, in this directory.
	I1212 00:33:24.673189  124345 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 00:33:24.673200  124345 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:33:24.673207  124345 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 00:33:24.673214  124345 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 00:33:24.673220  124345 command_runner.go:130] > # imagestore = ""
	I1212 00:33:24.673226  124345 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:33:24.673237  124345 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:33:24.673241  124345 command_runner.go:130] > storage_driver = "overlay"
	I1212 00:33:24.673246  124345 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:33:24.673254  124345 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:33:24.673258  124345 command_runner.go:130] > storage_option = [
	I1212 00:33:24.673263  124345 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 00:33:24.673273  124345 command_runner.go:130] > ]
	I1212 00:33:24.673282  124345 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:33:24.673288  124345 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:33:24.673295  124345 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:33:24.673300  124345 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:33:24.673308  124345 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:33:24.673312  124345 command_runner.go:130] > # always happen on a node reboot
	I1212 00:33:24.673317  124345 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:33:24.673329  124345 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:33:24.673337  124345 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:33:24.673343  124345 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:33:24.673348  124345 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1212 00:33:24.673355  124345 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:33:24.673365  124345 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:33:24.673369  124345 command_runner.go:130] > # internal_wipe = true
	I1212 00:33:24.673376  124345 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 00:33:24.673386  124345 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 00:33:24.673390  124345 command_runner.go:130] > # internal_repair = false
	I1212 00:33:24.673395  124345 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:33:24.673403  124345 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:33:24.673409  124345 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:33:24.673415  124345 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:33:24.673421  124345 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:33:24.673427  124345 command_runner.go:130] > [crio.api]
	I1212 00:33:24.673432  124345 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:33:24.673437  124345 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:33:24.673442  124345 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:33:24.673452  124345 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:33:24.673458  124345 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:33:24.673466  124345 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:33:24.673469  124345 command_runner.go:130] > # stream_port = "0"
	I1212 00:33:24.673474  124345 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:33:24.673481  124345 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:33:24.673487  124345 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:33:24.673492  124345 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:33:24.673500  124345 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:33:24.673508  124345 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 00:33:24.673512  124345 command_runner.go:130] > # minutes.
	I1212 00:33:24.673519  124345 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:33:24.673524  124345 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:33:24.673533  124345 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 00:33:24.673537  124345 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:33:24.673545  124345 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:33:24.673552  124345 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:33:24.673572  124345 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 00:33:24.673579  124345 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:33:24.673586  124345 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:33:24.673590  124345 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 00:33:24.673597  124345 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:33:24.673604  124345 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 00:33:24.673609  124345 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:33:24.673617  124345 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:33:24.673621  124345 command_runner.go:130] > [crio.runtime]
	I1212 00:33:24.673627  124345 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:33:24.673634  124345 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:33:24.673639  124345 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:33:24.673647  124345 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:33:24.673650  124345 command_runner.go:130] > # default_ulimits = [
	I1212 00:33:24.673653  124345 command_runner.go:130] > # ]
	I1212 00:33:24.673659  124345 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:33:24.673668  124345 command_runner.go:130] > # no_pivot = false
	I1212 00:33:24.673675  124345 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:33:24.673681  124345 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:33:24.673688  124345 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:33:24.673693  124345 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:33:24.673700  124345 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:33:24.673708  124345 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:33:24.673716  124345 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 00:33:24.673721  124345 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:33:24.673728  124345 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:33:24.673734  124345 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:33:24.673740  124345 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:33:24.673745  124345 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:33:24.673755  124345 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:33:24.673761  124345 command_runner.go:130] > conmon_env = [
	I1212 00:33:24.673766  124345 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 00:33:24.673770  124345 command_runner.go:130] > ]
	I1212 00:33:24.673775  124345 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:33:24.673781  124345 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:33:24.673787  124345 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:33:24.673794  124345 command_runner.go:130] > # default_env = [
	I1212 00:33:24.673798  124345 command_runner.go:130] > # ]
	I1212 00:33:24.673805  124345 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:33:24.673812  124345 command_runner.go:130] > # This option is deprecated, and will be interpreted based on whether SELinux is enabled on the host in the future.
	I1212 00:33:24.673817  124345 command_runner.go:130] > # selinux = false
	I1212 00:33:24.673823  124345 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:33:24.673832  124345 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 00:33:24.673837  124345 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 00:33:24.673843  124345 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:33:24.673849  124345 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 00:33:24.673855  124345 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 00:33:24.673862  124345 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 00:33:24.673866  124345 command_runner.go:130] > # which might increase security.
	I1212 00:33:24.673871  124345 command_runner.go:130] > # This option is currently deprecated,
	I1212 00:33:24.673879  124345 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1212 00:33:24.673883  124345 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 00:33:24.673891  124345 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:33:24.673897  124345 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:33:24.673905  124345 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:33:24.673911  124345 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 00:33:24.673918  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.673922  124345 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:33:24.673931  124345 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:33:24.673935  124345 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:33:24.673942  124345 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:33:24.673948  124345 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 00:33:24.673952  124345 command_runner.go:130] > # blockio parameters.
	I1212 00:33:24.673956  124345 command_runner.go:130] > # blockio_reload = false
	I1212 00:33:24.673962  124345 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:33:24.673968  124345 command_runner.go:130] > # irqbalance daemon.
	I1212 00:33:24.673972  124345 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:33:24.673981  124345 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 00:33:24.673989  124345 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 00:33:24.673996  124345 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 00:33:24.674003  124345 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 00:33:24.674010  124345 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:33:24.674018  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.674024  124345 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:33:24.674033  124345 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:33:24.674038  124345 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:33:24.674078  124345 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:33:24.674088  124345 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:33:24.674094  124345 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:33:24.674103  124345 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:33:24.674107  124345 command_runner.go:130] > # will be added.
	I1212 00:33:24.674111  124345 command_runner.go:130] > # default_capabilities = [
	I1212 00:33:24.674116  124345 command_runner.go:130] > # 	"CHOWN",
	I1212 00:33:24.674121  124345 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:33:24.674125  124345 command_runner.go:130] > # 	"FSETID",
	I1212 00:33:24.674131  124345 command_runner.go:130] > # 	"FOWNER",
	I1212 00:33:24.674134  124345 command_runner.go:130] > # 	"SETGID",
	I1212 00:33:24.674137  124345 command_runner.go:130] > # 	"SETUID",
	I1212 00:33:24.674143  124345 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:33:24.674146  124345 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:33:24.674150  124345 command_runner.go:130] > # 	"KILL",
	I1212 00:33:24.674153  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674161  124345 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:33:24.674169  124345 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:33:24.674174  124345 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 00:33:24.674182  124345 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:33:24.674188  124345 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:33:24.674193  124345 command_runner.go:130] > default_sysctls = [
	I1212 00:33:24.674198  124345 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 00:33:24.674204  124345 command_runner.go:130] > ]
	I1212 00:33:24.674209  124345 command_runner.go:130] > # List of devices on the host that a
	I1212 00:33:24.674217  124345 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:33:24.674221  124345 command_runner.go:130] > # allowed_devices = [
	I1212 00:33:24.674225  124345 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:33:24.674228  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674233  124345 command_runner.go:130] > # List of additional devices, specified as
	I1212 00:33:24.674242  124345 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:33:24.674247  124345 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:33:24.674263  124345 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:33:24.674276  124345 command_runner.go:130] > # additional_devices = [
	I1212 00:33:24.674279  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674284  124345 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:33:24.674290  124345 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:33:24.674294  124345 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:33:24.674300  124345 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:33:24.674304  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674311  124345 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:33:24.674318  124345 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:33:24.674323  124345 command_runner.go:130] > # Defaults to false.
	I1212 00:33:24.674330  124345 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:33:24.674336  124345 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:33:24.674344  124345 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:33:24.674348  124345 command_runner.go:130] > # hooks_dir = [
	I1212 00:33:24.674352  124345 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:33:24.674358  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674363  124345 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:33:24.674372  124345 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:33:24.674377  124345 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:33:24.674380  124345 command_runner.go:130] > #
	I1212 00:33:24.674385  124345 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:33:24.674393  124345 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:33:24.674398  124345 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:33:24.674402  124345 command_runner.go:130] > #
	I1212 00:33:24.674408  124345 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:33:24.674417  124345 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:33:24.674423  124345 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:33:24.674430  124345 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:33:24.674433  124345 command_runner.go:130] > #
	I1212 00:33:24.674439  124345 command_runner.go:130] > # default_mounts_file = ""
	I1212 00:33:24.674444  124345 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:33:24.674453  124345 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:33:24.674457  124345 command_runner.go:130] > pids_limit = 1024
	I1212 00:33:24.674465  124345 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 00:33:24.674471  124345 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:33:24.674477  124345 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:33:24.674485  124345 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:33:24.674490  124345 command_runner.go:130] > # log_size_max = -1
	I1212 00:33:24.674499  124345 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:33:24.674507  124345 command_runner.go:130] > # log_to_journald = false
	I1212 00:33:24.674513  124345 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:33:24.674520  124345 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:33:24.674525  124345 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:33:24.674530  124345 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:33:24.674535  124345 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:33:24.674541  124345 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:33:24.674546  124345 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:33:24.674552  124345 command_runner.go:130] > # read_only = false
	I1212 00:33:24.674558  124345 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:33:24.674566  124345 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:33:24.674570  124345 command_runner.go:130] > # live configuration reload.
	I1212 00:33:24.674576  124345 command_runner.go:130] > # log_level = "info"
	I1212 00:33:24.674582  124345 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:33:24.674588  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.674591  124345 command_runner.go:130] > # log_filter = ""
	I1212 00:33:24.674597  124345 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:33:24.674607  124345 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:33:24.674613  124345 command_runner.go:130] > # separated by comma.
	I1212 00:33:24.674621  124345 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:33:24.674627  124345 command_runner.go:130] > # uid_mappings = ""
	I1212 00:33:24.674632  124345 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:33:24.674638  124345 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:33:24.674644  124345 command_runner.go:130] > # separated by comma.
	I1212 00:33:24.674651  124345 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:33:24.674657  124345 command_runner.go:130] > # gid_mappings = ""
	I1212 00:33:24.674663  124345 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:33:24.674671  124345 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:33:24.674677  124345 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:33:24.674687  124345 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:33:24.674691  124345 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:33:24.674697  124345 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:33:24.674704  124345 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:33:24.674713  124345 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:33:24.674721  124345 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:33:24.674729  124345 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 00:33:24.674735  124345 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:33:24.674740  124345 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:33:24.674748  124345 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:33:24.674752  124345 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:33:24.674758  124345 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:33:24.674766  124345 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:33:24.674771  124345 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:33:24.674778  124345 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:33:24.674781  124345 command_runner.go:130] > drop_infra_ctr = false
	I1212 00:33:24.674787  124345 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:33:24.674795  124345 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:33:24.674802  124345 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:33:24.674808  124345 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:33:24.674815  124345 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 00:33:24.674823  124345 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 00:33:24.674828  124345 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 00:33:24.674835  124345 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 00:33:24.674840  124345 command_runner.go:130] > # shared_cpuset = ""
	I1212 00:33:24.674847  124345 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:33:24.674852  124345 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:33:24.674858  124345 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:33:24.674864  124345 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:33:24.674868  124345 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 00:33:24.674874  124345 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 00:33:24.674882  124345 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 00:33:24.674886  124345 command_runner.go:130] > # enable_criu_support = false
	I1212 00:33:24.674893  124345 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 00:33:24.674899  124345 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 00:33:24.674906  124345 command_runner.go:130] > # enable_pod_events = false
	I1212 00:33:24.674911  124345 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:33:24.674924  124345 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 00:33:24.674930  124345 command_runner.go:130] > # default_runtime = "runc"
	I1212 00:33:24.674934  124345 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:33:24.674941  124345 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1212 00:33:24.674952  124345 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:33:24.674961  124345 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:33:24.674971  124345 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:33:24.674977  124345 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:33:24.674982  124345 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:33:24.674988  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674994  124345 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:33:24.675002  124345 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:33:24.675009  124345 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 00:33:24.675017  124345 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 00:33:24.675021  124345 command_runner.go:130] > #
	I1212 00:33:24.675025  124345 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 00:33:24.675032  124345 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 00:33:24.675053  124345 command_runner.go:130] > # runtime_type = "oci"
	I1212 00:33:24.675060  124345 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 00:33:24.675064  124345 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 00:33:24.675070  124345 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 00:33:24.675075  124345 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 00:33:24.675085  124345 command_runner.go:130] > # monitor_env = []
	I1212 00:33:24.675090  124345 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 00:33:24.675096  124345 command_runner.go:130] > # allowed_annotations = []
	I1212 00:33:24.675101  124345 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 00:33:24.675107  124345 command_runner.go:130] > # Where:
	I1212 00:33:24.675112  124345 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 00:33:24.675118  124345 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 00:33:24.675126  124345 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:33:24.675132  124345 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:33:24.675138  124345 command_runner.go:130] > #   in $PATH.
	I1212 00:33:24.675145  124345 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 00:33:24.675150  124345 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:33:24.675156  124345 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 00:33:24.675164  124345 command_runner.go:130] > #   state.
	I1212 00:33:24.675173  124345 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:33:24.675183  124345 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 00:33:24.675192  124345 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:33:24.675198  124345 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:33:24.675206  124345 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:33:24.675216  124345 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:33:24.675230  124345 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:33:24.675241  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:33:24.675255  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:33:24.675272  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:33:24.675286  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:33:24.675299  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:33:24.675312  124345 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:33:24.675323  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 00:33:24.675334  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 00:33:24.675349  124345 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:33:24.675363  124345 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 00:33:24.675373  124345 command_runner.go:130] > #   deprecated option "conmon".
	I1212 00:33:24.675385  124345 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 00:33:24.675396  124345 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 00:33:24.675410  124345 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 00:33:24.675422  124345 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 00:33:24.675437  124345 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 00:33:24.675449  124345 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 00:33:24.675463  124345 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 00:33:24.675477  124345 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 00:33:24.675483  124345 command_runner.go:130] > #
	I1212 00:33:24.675494  124345 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 00:33:24.675503  124345 command_runner.go:130] > #
	I1212 00:33:24.675515  124345 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 00:33:24.675530  124345 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 00:33:24.675539  124345 command_runner.go:130] > #
	I1212 00:33:24.675550  124345 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 00:33:24.675564  124345 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 00:33:24.675572  124345 command_runner.go:130] > #
	I1212 00:33:24.675583  124345 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 00:33:24.675603  124345 command_runner.go:130] > # feature.
	I1212 00:33:24.675609  124345 command_runner.go:130] > #
	I1212 00:33:24.675621  124345 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 00:33:24.675634  124345 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 00:33:24.675648  124345 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 00:33:24.675665  124345 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 00:33:24.675679  124345 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 00:33:24.675687  124345 command_runner.go:130] > #
	I1212 00:33:24.675698  124345 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 00:33:24.675711  124345 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 00:33:24.675717  124345 command_runner.go:130] > #
	I1212 00:33:24.675731  124345 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 00:33:24.675744  124345 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 00:33:24.675752  124345 command_runner.go:130] > #
	I1212 00:33:24.675762  124345 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 00:33:24.675776  124345 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 00:33:24.675785  124345 command_runner.go:130] > # limitation.
	I1212 00:33:24.675798  124345 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:33:24.675807  124345 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 00:33:24.675814  124345 command_runner.go:130] > runtime_type = "oci"
	I1212 00:33:24.675822  124345 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:33:24.675832  124345 command_runner.go:130] > runtime_config_path = ""
	I1212 00:33:24.675841  124345 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:33:24.675851  124345 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:33:24.675859  124345 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:33:24.675866  124345 command_runner.go:130] > monitor_env = [
	I1212 00:33:24.675878  124345 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 00:33:24.675887  124345 command_runner.go:130] > ]
	I1212 00:33:24.675896  124345 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:33:24.675910  124345 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:33:24.675922  124345 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:33:24.675936  124345 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:33:24.675952  124345 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 00:33:24.675969  124345 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 00:33:24.675981  124345 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:33:24.675997  124345 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:33:24.676012  124345 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:33:24.676022  124345 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:33:24.676032  124345 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:33:24.676037  124345 command_runner.go:130] > # Example:
	I1212 00:33:24.676043  124345 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:33:24.676049  124345 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:33:24.676059  124345 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:33:24.676066  124345 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:33:24.676073  124345 command_runner.go:130] > # cpuset = 0
	I1212 00:33:24.676081  124345 command_runner.go:130] > # cpushares = "0-1"
	I1212 00:33:24.676088  124345 command_runner.go:130] > # Where:
	I1212 00:33:24.676098  124345 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:33:24.676109  124345 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:33:24.676118  124345 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:33:24.676128  124345 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:33:24.676141  124345 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:33:24.676154  124345 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 00:33:24.676165  124345 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 00:33:24.676179  124345 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 00:33:24.676190  124345 command_runner.go:130] > # Default value is set to true
	I1212 00:33:24.676201  124345 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 00:33:24.676213  124345 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 00:33:24.676225  124345 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 00:33:24.676238  124345 command_runner.go:130] > # Default value is set to 'false'
	I1212 00:33:24.676248  124345 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 00:33:24.676259  124345 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:33:24.676275  124345 command_runner.go:130] > #
	I1212 00:33:24.676289  124345 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:33:24.676302  124345 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 00:33:24.676316  124345 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 00:33:24.676330  124345 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 00:33:24.676343  124345 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 00:33:24.676349  124345 command_runner.go:130] > [crio.image]
	I1212 00:33:24.676360  124345 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:33:24.676371  124345 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:33:24.676385  124345 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:33:24.676399  124345 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:33:24.676409  124345 command_runner.go:130] > # global_auth_file = ""
	I1212 00:33:24.676418  124345 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:33:24.676428  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.676437  124345 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1212 00:33:24.676451  124345 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:33:24.676466  124345 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:33:24.676478  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.676493  124345 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:33:24.676508  124345 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:33:24.676521  124345 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 00:33:24.676532  124345 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 00:33:24.676545  124345 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:33:24.676556  124345 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:33:24.676569  124345 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 00:33:24.676582  124345 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 00:33:24.676596  124345 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 00:33:24.676612  124345 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 00:33:24.676626  124345 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 00:33:24.676639  124345 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 00:33:24.676651  124345 command_runner.go:130] > # pinned_images = [
	I1212 00:33:24.676661  124345 command_runner.go:130] > # ]
	I1212 00:33:24.676675  124345 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:33:24.676688  124345 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:33:24.676701  124345 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:33:24.676712  124345 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:33:24.676724  124345 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:33:24.676735  124345 command_runner.go:130] > # signature_policy = ""
	I1212 00:33:24.676747  124345 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 00:33:24.676762  124345 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 00:33:24.676776  124345 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 00:33:24.676790  124345 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1212 00:33:24.676800  124345 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 00:33:24.676819  124345 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 00:33:24.676833  124345 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:33:24.676847  124345 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:33:24.676857  124345 command_runner.go:130] > # changing them here.
	I1212 00:33:24.676866  124345 command_runner.go:130] > # insecure_registries = [
	I1212 00:33:24.676876  124345 command_runner.go:130] > # ]
	I1212 00:33:24.676888  124345 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:33:24.676900  124345 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 00:33:24.676910  124345 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:33:24.676921  124345 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:33:24.676930  124345 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:33:24.676946  124345 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 00:33:24.676955  124345 command_runner.go:130] > # CNI plugins.
	I1212 00:33:24.676962  124345 command_runner.go:130] > [crio.network]
	I1212 00:33:24.676976  124345 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:33:24.676990  124345 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 00:33:24.677000  124345 command_runner.go:130] > # cni_default_network = ""
	I1212 00:33:24.677013  124345 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:33:24.677024  124345 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:33:24.677035  124345 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:33:24.677046  124345 command_runner.go:130] > # plugin_dirs = [
	I1212 00:33:24.677056  124345 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:33:24.677064  124345 command_runner.go:130] > # ]
	I1212 00:33:24.677074  124345 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 00:33:24.677083  124345 command_runner.go:130] > [crio.metrics]
	I1212 00:33:24.677092  124345 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:33:24.677102  124345 command_runner.go:130] > enable_metrics = true
	I1212 00:33:24.677110  124345 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:33:24.677121  124345 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 00:33:24.677132  124345 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:33:24.677144  124345 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:33:24.677156  124345 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:33:24.677167  124345 command_runner.go:130] > # metrics_collectors = [
	I1212 00:33:24.677175  124345 command_runner.go:130] > # 	"operations",
	I1212 00:33:24.677187  124345 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 00:33:24.677198  124345 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 00:33:24.677208  124345 command_runner.go:130] > # 	"operations_errors",
	I1212 00:33:24.677216  124345 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 00:33:24.677225  124345 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 00:33:24.677234  124345 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 00:33:24.677246  124345 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 00:33:24.677257  124345 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 00:33:24.677274  124345 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:33:24.677284  124345 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 00:33:24.677293  124345 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 00:33:24.677302  124345 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:33:24.677309  124345 command_runner.go:130] > # 	"containers_oom",
	I1212 00:33:24.677319  124345 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:33:24.677326  124345 command_runner.go:130] > # 	"operations_total",
	I1212 00:33:24.677337  124345 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:33:24.677347  124345 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:33:24.677358  124345 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:33:24.677367  124345 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:33:24.677377  124345 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:33:24.677387  124345 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:33:24.677396  124345 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:33:24.677410  124345 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:33:24.677421  124345 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:33:24.677433  124345 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 00:33:24.677444  124345 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 00:33:24.677452  124345 command_runner.go:130] > # ]
	I1212 00:33:24.677461  124345 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:33:24.677469  124345 command_runner.go:130] > # metrics_port = 9090
	I1212 00:33:24.677479  124345 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:33:24.677488  124345 command_runner.go:130] > # metrics_socket = ""
	I1212 00:33:24.677498  124345 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:33:24.677512  124345 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:33:24.677526  124345 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:33:24.677537  124345 command_runner.go:130] > # certificate on any modification event.
	I1212 00:33:24.677545  124345 command_runner.go:130] > # metrics_cert = ""
	I1212 00:33:24.677555  124345 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:33:24.677566  124345 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:33:24.677574  124345 command_runner.go:130] > # metrics_key = ""
	I1212 00:33:24.677585  124345 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:33:24.677594  124345 command_runner.go:130] > [crio.tracing]
	I1212 00:33:24.677605  124345 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:33:24.677615  124345 command_runner.go:130] > # enable_tracing = false
	I1212 00:33:24.677628  124345 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 00:33:24.677637  124345 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 00:33:24.677649  124345 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 00:33:24.677661  124345 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 00:33:24.677671  124345 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 00:33:24.677679  124345 command_runner.go:130] > [crio.nri]
	I1212 00:33:24.677690  124345 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 00:33:24.677698  124345 command_runner.go:130] > # enable_nri = false
	I1212 00:33:24.677706  124345 command_runner.go:130] > # NRI socket to listen on.
	I1212 00:33:24.677718  124345 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 00:33:24.677728  124345 command_runner.go:130] > # NRI plugin directory to use.
	I1212 00:33:24.677736  124345 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 00:33:24.677748  124345 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 00:33:24.677760  124345 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 00:33:24.677773  124345 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 00:33:24.677784  124345 command_runner.go:130] > # nri_disable_connections = false
	I1212 00:33:24.677794  124345 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 00:33:24.677804  124345 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 00:33:24.677816  124345 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 00:33:24.677825  124345 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 00:33:24.677837  124345 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:33:24.677846  124345 command_runner.go:130] > [crio.stats]
	I1212 00:33:24.677860  124345 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:33:24.677872  124345 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:33:24.677883  124345 command_runner.go:130] > # stats_collection_period = 0
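	The dump above is the full CRI-O configuration the test reads back from the node. The handful of non-default settings (storage_driver, cgroup_manager = "cgroupfs", pids_limit = 1024, the 16 MiB gRPC message limits, pause_image) are the kind of values normally carried in a small drop-in rather than by editing the base file. A minimal sketch of such an override, assuming CRI-O's standard /etc/crio/crio.conf.d/ drop-in directory and a hypothetical file name, could look like:
	
	    # hypothetical drop-in; values taken from the dump above
	    sudo tee /etc/crio/crio.conf.d/99-example.conf <<'EOF'
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    pids_limit = 1024
	
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10"
	    EOF
	    sudo systemctl restart crio
	
	CRI-O merges drop-ins over the base configuration at startup, so a service restart is needed for a change like this to take effect.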
	I1212 00:33:24.678002  124345 cni.go:84] Creating CNI manager for ""
	I1212 00:33:24.678014  124345 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1212 00:33:24.678026  124345 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:33:24.678058  124345 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-492537 NodeName:multinode-492537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:33:24.678230  124345 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-492537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.208"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:33:24.678320  124345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:33:24.689740  124345 command_runner.go:130] > kubeadm
	I1212 00:33:24.689760  124345 command_runner.go:130] > kubectl
	I1212 00:33:24.689766  124345 command_runner.go:130] > kubelet
	I1212 00:33:24.689832  124345 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:33:24.689887  124345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:33:24.699709  124345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1212 00:33:24.717308  124345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:33:24.734383  124345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1212 00:33:24.751534  124345 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I1212 00:33:24.755436  124345 command_runner.go:130] > 192.168.39.208	control-plane.minikube.internal
	I1212 00:33:24.755513  124345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:24.902042  124345 ssh_runner.go:195] Run: sudo systemctl start kubelet
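Editor's note: before reloading systemd and starting the kubelet, the restart path above only verifies that /etc/hosts already maps control-plane.minikube.internal to 192.168.39.208. A minimal standalone sketch of that check-and-append step in Go, assuming a hypothetical ensureHostsEntry helper rather than minikube's ssh_runner plumbing:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry appends "ip<TAB>host" to path unless a line already maps
    // host to ip. It mirrors the grep shown in the log above; the helper itself
    // is illustrative only and needs root to write /etc/hosts.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        for _, line := range strings.Split(string(data), "\n") {
            fields := strings.Fields(line)
            if len(fields) >= 2 && fields[0] == ip && fields[1] == host {
                return nil // entry already present, nothing to do
            }
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
        return err
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.208", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }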
	I1212 00:33:24.918633  124345 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537 for IP: 192.168.39.208
	I1212 00:33:24.918663  124345 certs.go:194] generating shared ca certs ...
	I1212 00:33:24.918692  124345 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.918876  124345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:33:24.918939  124345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:33:24.918953  124345 certs.go:256] generating profile certs ...
	I1212 00:33:24.919093  124345 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/client.key
	I1212 00:33:24.919176  124345 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.key.ca4dfcaa
	I1212 00:33:24.919213  124345 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.key
	I1212 00:33:24.919225  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:33:24.919237  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:33:24.919248  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:33:24.919258  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:33:24.919270  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:33:24.919280  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:33:24.919292  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:33:24.919308  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:33:24.919365  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:33:24.919394  124345 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:33:24.919406  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:33:24.919468  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:33:24.919496  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:33:24.919522  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:33:24.919563  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:33:24.919588  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:33:24.919630  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:33:24.919646  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.920273  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:33:24.944679  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:33:24.968838  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:33:24.992775  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:33:25.017458  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:33:25.041689  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:33:25.065746  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:33:25.089699  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:33:25.113283  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:33:25.137363  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:33:25.161217  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:33:25.187064  124345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:33:25.204830  124345 ssh_runner.go:195] Run: openssl version
	I1212 00:33:25.210851  124345 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1212 00:33:25.210937  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:33:25.222030  124345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:33:25.226519  124345 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:33:25.226581  124345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:33:25.226632  124345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:33:25.232467  124345 command_runner.go:130] > 51391683
	I1212 00:33:25.232528  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:33:25.242888  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:33:25.253813  124345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:33:25.258641  124345 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:33:25.258674  124345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:33:25.258719  124345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:33:25.264449  124345 command_runner.go:130] > 3ec20f2e
	I1212 00:33:25.264507  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:33:25.273973  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:33:25.285194  124345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.289653  124345 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.289745  124345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.289793  124345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.295197  124345 command_runner.go:130] > b5213941
	I1212 00:33:25.295454  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
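Editor's note: each CA certificate above is wired into the system trust store the same way: ask openssl for the certificate's subject hash (51391683, 3ec20f2e, b5213941 in this run) and symlink /etc/ssl/certs/<hash>.0 to the PEM file. A sketch of those two steps, shelling out to openssl exactly as the log does; the hashedSymlink helper and the hard-coded paths are illustrative only:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // hashedSymlink links <certsDir>/<subject-hash>.0 to certPath, the same
    // openssl-based scheme the log above applies to minikubeCA.pem and friends.
    func hashedSymlink(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace an existing link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := hashedSymlink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }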
	I1212 00:33:25.304934  124345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:25.309763  124345 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:25.309789  124345 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:33:25.309797  124345 command_runner.go:130] > Device: 253,1	Inode: 5244462     Links: 1
	I1212 00:33:25.309803  124345 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:33:25.309809  124345 command_runner.go:130] > Access: 2024-12-12 00:26:24.410761101 +0000
	I1212 00:33:25.309816  124345 command_runner.go:130] > Modify: 2024-12-12 00:26:24.410761101 +0000
	I1212 00:33:25.309821  124345 command_runner.go:130] > Change: 2024-12-12 00:26:24.410761101 +0000
	I1212 00:33:25.309826  124345 command_runner.go:130] >  Birth: 2024-12-12 00:26:24.410761101 +0000
	I1212 00:33:25.309864  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:33:25.315607  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.315673  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:33:25.321224  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.321427  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:33:25.327267  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.327329  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:33:25.333074  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.333134  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:33:25.338993  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.339040  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:33:25.344547  124345 command_runner.go:130] > Certificate will not expire
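Editor's note: the -checkend 86400 invocations above ask openssl whether each certificate expires within the next 24 hours. The same test can be expressed with Go's crypto/x509 instead of shelling out; this is only a sketch of the equivalent check, not what minikube runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path expires
    // within d — the equivalent of `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if soon {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }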
	I1212 00:33:25.344709  124345 kubeadm.go:392] StartCluster: {Name:multinode-492537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:multinode-492537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:25.344838  124345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:33:25.344892  124345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:33:25.384164  124345 command_runner.go:130] > 6c36c43dca7710da2daac93c4db2b9fe66d56935018b5cfb719223ca69bfeceb
	I1212 00:33:25.384188  124345 command_runner.go:130] > 15858f7c6c1998582e3c864d38164bdc97c02cc7c5821a71997397e8517d8996
	I1212 00:33:25.384195  124345 command_runner.go:130] > 76d1bbad8679a30947a98033746896e9d29f70b0eebab4f3ba9847677d057322
	I1212 00:33:25.384201  124345 command_runner.go:130] > a256f99dfeb012a82928d5b602a902e808285131c2171f286bcafe4fd2e24393
	I1212 00:33:25.384206  124345 command_runner.go:130] > 1bcb1a5c48edaeda78c1d27f17cce1b209165b9af22bc7735d1657078fb0f1cc
	I1212 00:33:25.384211  124345 command_runner.go:130] > 4e22f073589e4167bb82c8d86d415e9c1ed9d121f86471cbde61732a2b45d146
	I1212 00:33:25.384217  124345 command_runner.go:130] > 02c9588db3283f504267742f31da7c57cb5950e15720f4243bf286f0cd58e583
	I1212 00:33:25.384233  124345 command_runner.go:130] > dd846a91091143c5ca25f344cb9f2fa60b447f24daca84e2adb65c98007ca3c3
	I1212 00:33:25.386343  124345 cri.go:89] found id: "6c36c43dca7710da2daac93c4db2b9fe66d56935018b5cfb719223ca69bfeceb"
	I1212 00:33:25.386363  124345 cri.go:89] found id: "15858f7c6c1998582e3c864d38164bdc97c02cc7c5821a71997397e8517d8996"
	I1212 00:33:25.386366  124345 cri.go:89] found id: "76d1bbad8679a30947a98033746896e9d29f70b0eebab4f3ba9847677d057322"
	I1212 00:33:25.386369  124345 cri.go:89] found id: "a256f99dfeb012a82928d5b602a902e808285131c2171f286bcafe4fd2e24393"
	I1212 00:33:25.386372  124345 cri.go:89] found id: "1bcb1a5c48edaeda78c1d27f17cce1b209165b9af22bc7735d1657078fb0f1cc"
	I1212 00:33:25.386375  124345 cri.go:89] found id: "4e22f073589e4167bb82c8d86d415e9c1ed9d121f86471cbde61732a2b45d146"
	I1212 00:33:25.386379  124345 cri.go:89] found id: "02c9588db3283f504267742f31da7c57cb5950e15720f4243bf286f0cd58e583"
	I1212 00:33:25.386382  124345 cri.go:89] found id: "dd846a91091143c5ca25f344cb9f2fa60b447f24daca84e2adb65c98007ca3c3"
	I1212 00:33:25.386384  124345 cri.go:89] found id: ""
	I1212 00:33:25.386431  124345 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
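Editor's note: the captured log is cut off while cri.go enumerates kube-system containers. It runs crictl ps with a namespace label filter, treats each non-empty line of output as a container ID ("found id: ..."), and then moves on to `runc list -f json`. A sketch of the ID-collection step, run locally rather than through minikube's ssh_runner; the helper name is made up, the crictl arguments are the ones shown above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers returns the container IDs crictl reports for the
    // kube-system namespace, one ID per line of `crictl ps --quiet` output.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }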
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-492537 -n multinode-492537
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-492537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.27s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 stop
E1212 00:35:58.767092   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-492537 stop: exit status 82 (2m0.474805933s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-492537-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
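Editor's note: exit status 82 (GUEST_STOP_TIMEOUT) means the stop command ran out of time while the multinode-492537-m02 VM still reported state "Running". Conceptually the failure is a poll-until-stopped loop whose deadline expires; the sketch below illustrates that shape only — the waitStopped helper, the state strings, and the timeout values are assumptions for illustration, not minikube's implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // stateFunc reports the current VM state, e.g. "Running" or "Stopped".
    type stateFunc func() (string, error)

    // waitStopped polls the VM state until it is "Stopped" or the deadline passes.
    // A deadline expiry is what the GUEST_STOP_TIMEOUT above corresponds to.
    func waitStopped(getState stateFunc, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := getState()
            if err != nil {
                return err
            }
            if state == "Stopped" {
                return nil
            }
            time.Sleep(interval)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        // Simulated guest that never stops, reproducing the timeout path.
        err := waitStopped(func() (string, error) { return "Running", nil },
            3*time.Second, 500*time.Millisecond)
        fmt.Println("stop result:", err)
    }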
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-492537 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status
E1212 00:37:29.694437   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-492537 status: (18.691202545s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr: (3.359944141s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-492537 -n multinode-492537
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-492537 logs -n 25: (2.011751016s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m02:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537:/home/docker/cp-test_multinode-492537-m02_multinode-492537.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537 sudo cat                                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m02_multinode-492537.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m02:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03:/home/docker/cp-test_multinode-492537-m02_multinode-492537-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537-m03 sudo cat                                   | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m02_multinode-492537-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp testdata/cp-test.txt                                                | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3175371616/001/cp-test_multinode-492537-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537:/home/docker/cp-test_multinode-492537-m03_multinode-492537.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537 sudo cat                                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m03_multinode-492537.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt                       | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02:/home/docker/cp-test_multinode-492537-m03_multinode-492537-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537-m02 sudo cat                                   | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m03_multinode-492537-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-492537 node stop m03                                                          | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	| node    | multinode-492537 node start                                                             | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-492537                                                                | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC |                     |
	| stop    | -p multinode-492537                                                                     | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC |                     |
	| start   | -p multinode-492537                                                                     | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:31 UTC | 12 Dec 24 00:35 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-492537                                                                | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:35 UTC |                     |
	| node    | multinode-492537 node delete                                                            | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:35 UTC | 12 Dec 24 00:35 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-492537 stop                                                                   | multinode-492537 | jenkins | v1.34.0 | 12 Dec 24 00:35 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:31:51
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:31:51.044676  124345 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:31:51.044782  124345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:51.044790  124345 out.go:358] Setting ErrFile to fd 2...
	I1212 00:31:51.044795  124345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:31:51.044957  124345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:31:51.045540  124345 out.go:352] Setting JSON to false
	I1212 00:31:51.046482  124345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11653,"bootTime":1733951858,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:31:51.046572  124345 start.go:139] virtualization: kvm guest
	I1212 00:31:51.049152  124345 out.go:177] * [multinode-492537] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:31:51.050704  124345 notify.go:220] Checking for updates...
	I1212 00:31:51.050725  124345 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:31:51.052114  124345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:31:51.053646  124345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:31:51.054994  124345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:31:51.056274  124345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:31:51.057458  124345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:31:51.058982  124345 config.go:182] Loaded profile config "multinode-492537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:31:51.059076  124345 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:31:51.059515  124345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:31:51.059556  124345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:31:51.074714  124345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I1212 00:31:51.075175  124345 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:31:51.075735  124345 main.go:141] libmachine: Using API Version  1
	I1212 00:31:51.075754  124345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:31:51.076128  124345 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:31:51.076330  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:31:51.110816  124345 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:31:51.112014  124345 start.go:297] selected driver: kvm2
	I1212 00:31:51.112030  124345 start.go:901] validating driver "kvm2" against &{Name:multinode-492537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:51.112202  124345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:31:51.112619  124345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:31:51.112701  124345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:31:51.127127  124345 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:31:51.128148  124345 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:31:51.128194  124345 cni.go:84] Creating CNI manager for ""
	I1212 00:31:51.128262  124345 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1212 00:31:51.128346  124345 start.go:340] cluster config:
	{Name:multinode-492537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:31:51.128544  124345 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:31:51.131189  124345 out.go:177] * Starting "multinode-492537" primary control-plane node in "multinode-492537" cluster
	I1212 00:31:51.132676  124345 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:31:51.132716  124345 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1212 00:31:51.132730  124345 cache.go:56] Caching tarball of preloaded images
	I1212 00:31:51.132794  124345 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:31:51.132806  124345 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
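Editor's note: the preload download is skipped here because the tarball for v1.31.2 on cri-o is already in the local cache. A sketch of that existence check, assuming nothing more than an os.Stat on the cache path taken from the log; the preloadInCache helper is made up, and the "v18" preload generation is copied verbatim from this run rather than derived from minikube's source:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadInCache reports whether the preloaded-images tarball for the given
    // Kubernetes version and runtime is already on disk, so the download can be skipped.
    func preloadInCache(minikubeHome, k8sVersion, runtime string) (string, bool) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
        path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
        _, err := os.Stat(path)
        return path, err == nil
    }

    func main() {
        path, ok := preloadInCache(os.ExpandEnv("$HOME/.minikube"), "v1.31.2", "cri-o")
        if ok {
            fmt.Println("Found local preload:", path)
        } else {
            fmt.Println("preload not cached, would download:", path)
        }
    }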
	I1212 00:31:51.132919  124345 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/config.json ...
	I1212 00:31:51.133098  124345 start.go:360] acquireMachinesLock for multinode-492537: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:31:51.133136  124345 start.go:364] duration metric: took 21.679µs to acquireMachinesLock for "multinode-492537"
	I1212 00:31:51.133151  124345 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:31:51.133160  124345 fix.go:54] fixHost starting: 
	I1212 00:31:51.133403  124345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:31:51.133434  124345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:31:51.147659  124345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I1212 00:31:51.148097  124345 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:31:51.148611  124345 main.go:141] libmachine: Using API Version  1
	I1212 00:31:51.148635  124345 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:31:51.148971  124345 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:31:51.149147  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:31:51.149279  124345 main.go:141] libmachine: (multinode-492537) Calling .GetState
	I1212 00:31:51.150757  124345 fix.go:112] recreateIfNeeded on multinode-492537: state=Running err=<nil>
	W1212 00:31:51.150779  124345 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:31:51.153207  124345 out.go:177] * Updating the running kvm2 "multinode-492537" VM ...
	I1212 00:31:51.154454  124345 machine.go:93] provisionDockerMachine start ...
	I1212 00:31:51.154475  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:31:51.154684  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.157075  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.157470  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.157502  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.157621  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.157761  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.157870  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.157978  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.158113  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:31:51.158378  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:31:51.158396  124345 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 00:31:51.272751  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-492537
	
	I1212 00:31:51.272782  124345 main.go:141] libmachine: (multinode-492537) Calling .GetMachineName
	I1212 00:31:51.273034  124345 buildroot.go:166] provisioning hostname "multinode-492537"
	I1212 00:31:51.273068  124345 main.go:141] libmachine: (multinode-492537) Calling .GetMachineName
	I1212 00:31:51.273260  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.275899  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.276230  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.276266  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.276342  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.276513  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.276664  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.276785  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.276947  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:31:51.277157  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:31:51.277175  124345 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-492537 && echo "multinode-492537" | sudo tee /etc/hostname
	I1212 00:31:51.403928  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-492537
	
	I1212 00:31:51.403964  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.406649  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.407027  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.407056  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.407230  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.407383  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.407548  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.407687  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.407846  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:31:51.408013  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:31:51.408029  124345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-492537' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-492537/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-492537' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:31:51.516444  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:31:51.516479  124345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:31:51.516520  124345 buildroot.go:174] setting up certificates
	I1212 00:31:51.516530  124345 provision.go:84] configureAuth start
	I1212 00:31:51.516541  124345 main.go:141] libmachine: (multinode-492537) Calling .GetMachineName
	I1212 00:31:51.516816  124345 main.go:141] libmachine: (multinode-492537) Calling .GetIP
	I1212 00:31:51.519494  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.519845  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.519866  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.520017  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.521931  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.522235  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.522270  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.522369  124345 provision.go:143] copyHostCerts
	I1212 00:31:51.522422  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:31:51.522461  124345 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:31:51.522483  124345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:31:51.522560  124345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:31:51.522691  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:31:51.522720  124345 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:31:51.522728  124345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:31:51.522768  124345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:31:51.522847  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:31:51.522870  124345 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:31:51.522877  124345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:31:51.522914  124345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:31:51.522996  124345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.multinode-492537 san=[127.0.0.1 192.168.39.208 localhost minikube multinode-492537]
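The provision.go:117 line above shows the server certificate being generated for org jenkins.multinode-492537 with SANs [127.0.0.1 192.168.39.208 localhost minikube multinode-492537], signed by the profile's CA (ca.pem/ca-key.pem). As a rough illustration only, the Go sketch below builds a certificate with those same SANs using the standard library; it is self-signed for brevity rather than CA-signed, and every field other than the Organization, SANs, and the 26280h expiration (taken from the profile config later in this log) is an assumption, not minikube's actual code.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs as listed in the provision.go line above.
    	dnsNames := []string{"localhost", "minikube", "multinode-492537"}
    	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.208")}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-492537"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames,
    		IPAddresses:  ips,
    	}
    	// Self-signed here for brevity; minikube instead signs with its CA key pair.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		log.Fatal(err)
    	}
    }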
	I1212 00:31:51.666722  124345 provision.go:177] copyRemoteCerts
	I1212 00:31:51.666819  124345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:31:51.666853  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.669491  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.669802  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.669829  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.669988  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.670161  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.670310  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.670446  124345 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:31:51.754283  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 00:31:51.754367  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:31:51.779717  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 00:31:51.779790  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1212 00:31:51.805182  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 00:31:51.805266  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:31:51.830435  124345 provision.go:87] duration metric: took 313.887878ms to configureAuth
	I1212 00:31:51.830465  124345 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:31:51.830736  124345 config.go:182] Loaded profile config "multinode-492537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:31:51.830826  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:31:51.833574  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.833978  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:31:51.834006  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:31:51.834159  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:31:51.834326  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.834467  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:31:51.834580  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:31:51.834752  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:31:51.834970  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:31:51.834990  124345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:33:22.702659  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:33:22.702697  124345 machine.go:96] duration metric: took 1m31.548227464s to provisionDockerMachine
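The SSH command issued at 00:31:51 writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' to /etc/sysconfig/crio.minikube and restarts CRI-O; it does not return until 00:33:22, so that single restart accounts for nearly all of the 1m31.5s provisionDockerMachine duration reported above. The Go sketch below runs the same remote command with the local ssh client; it is only an illustration of the step shown in the log (not minikube's ssh_runner), and the user, host, and key path are placeholders copied from the log.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Shell command as it appears in the log above: set the CRI-O options and restart the service.
    	remoteCmd := `sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

    	// Placeholder connection details taken from the log; adjust for your own machine.
    	out, err := exec.Command("ssh",
    		"-i", "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa",
    		"docker@192.168.39.208", remoteCmd).CombinedOutput()
    	if err != nil {
    		log.Fatalf("remote command failed: %v\n%s", err, out)
    	}
    	fmt.Printf("%s", out)
    }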
	I1212 00:33:22.702712  124345 start.go:293] postStartSetup for "multinode-492537" (driver="kvm2")
	I1212 00:33:22.702723  124345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:33:22.702743  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.703156  124345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:33:22.703202  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:33:22.706446  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.706870  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.706899  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.707072  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:33:22.707255  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.707409  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:33:22.707581  124345 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:33:22.795827  124345 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:33:22.800388  124345 command_runner.go:130] > NAME=Buildroot
	I1212 00:33:22.800409  124345 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1212 00:33:22.800414  124345 command_runner.go:130] > ID=buildroot
	I1212 00:33:22.800418  124345 command_runner.go:130] > VERSION_ID=2023.02.9
	I1212 00:33:22.800423  124345 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1212 00:33:22.800453  124345 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:33:22.800469  124345 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:33:22.800532  124345 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:33:22.800607  124345 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:33:22.800617  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /etc/ssl/certs/936002.pem
	I1212 00:33:22.800695  124345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:33:22.811091  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:33:22.835606  124345 start.go:296] duration metric: took 132.865141ms for postStartSetup
	I1212 00:33:22.835660  124345 fix.go:56] duration metric: took 1m31.702501396s for fixHost
	I1212 00:33:22.835684  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:33:22.838720  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.839129  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.839177  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.839305  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:33:22.839519  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.839696  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.839850  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:33:22.840063  124345 main.go:141] libmachine: Using SSH client type: native
	I1212 00:33:22.840238  124345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I1212 00:33:22.840248  124345 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:33:22.948415  124345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733963602.916067935
	
	I1212 00:33:22.948444  124345 fix.go:216] guest clock: 1733963602.916067935
	I1212 00:33:22.948452  124345 fix.go:229] Guest: 2024-12-12 00:33:22.916067935 +0000 UTC Remote: 2024-12-12 00:33:22.835666506 +0000 UTC m=+91.830075377 (delta=80.401429ms)
	I1212 00:33:22.948471  124345 fix.go:200] guest clock delta is within tolerance: 80.401429ms
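The fix.go lines above compare the guest clock (the `date +%s.%N` output) against the local reference time recorded when the command returned, and accept the 80.401429ms delta as within tolerance. A minimal sketch of that arithmetic, using the two values from the log and assuming UTC (the actual tolerance threshold used by minikube is not stated here, so the 1s check below is an assumption):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	// Guest clock string as returned by `date +%s.%N` in the log.
    	guestStr := "1733963602.916067935"
    	parts := strings.SplitN(guestStr, ".", 2)
    	secs, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(secs, nsec).UTC()

    	// Local reference time captured when the SSH command returned (value taken from the log).
    	remote := time.Date(2024, 12, 12, 0, 33, 22, 835666506, time.UTC)

    	delta := guest.Sub(remote)
    	fmt.Printf("guest=%s remote=%s delta=%s\n", guest, remote, delta)
    	// Hypothetical tolerance; the exact threshold is an assumption.
    	if math.Abs(delta.Seconds()) < 1.0 {
    		fmt.Println("guest clock delta is within tolerance")
    	}
    }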
	I1212 00:33:22.948477  124345 start.go:83] releasing machines lock for "multinode-492537", held for 1m31.815331691s
	I1212 00:33:22.948495  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.948773  124345 main.go:141] libmachine: (multinode-492537) Calling .GetIP
	I1212 00:33:22.951327  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.951762  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.951785  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.951941  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.952462  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.952631  124345 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:33:22.952731  124345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:33:22.952782  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:33:22.952891  124345 ssh_runner.go:195] Run: cat /version.json
	I1212 00:33:22.952919  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:33:22.955472  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.955623  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.955856  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.955883  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.955932  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:22.955950  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:22.956033  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:33:22.956213  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:33:22.956224  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.956446  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:33:22.956459  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:33:22.956622  124345 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:33:22.956651  124345 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:33:22.956767  124345 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:33:23.036772  124345 command_runner.go:130] > {"iso_version": "v1.34.0-1733936888-20083", "kicbase_version": "v0.0.45-1733912881-20083", "minikube_version": "v1.34.0", "commit": "c120d5e16c3cccce289808bdfc18c123105e3e3b"}
	I1212 00:33:23.037069  124345 ssh_runner.go:195] Run: systemctl --version
	I1212 00:33:23.064573  124345 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 00:33:23.064629  124345 command_runner.go:130] > systemd 252 (252)
	I1212 00:33:23.064669  124345 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1212 00:33:23.064732  124345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:33:23.231649  124345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 00:33:23.237725  124345 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 00:33:23.237855  124345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:33:23.237919  124345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:33:23.247775  124345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:33:23.247794  124345 start.go:495] detecting cgroup driver to use...
	I1212 00:33:23.247867  124345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:33:23.267920  124345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:33:23.282371  124345 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:33:23.282433  124345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:33:23.298269  124345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:33:23.313631  124345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:33:23.468777  124345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:33:23.608040  124345 docker.go:233] disabling docker service ...
	I1212 00:33:23.608130  124345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:33:23.626148  124345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:33:23.640288  124345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:33:23.778080  124345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:33:23.916990  124345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:33:23.931199  124345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:33:23.950997  124345 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 00:33:23.951041  124345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:33:23.951087  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.961760  124345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:33:23.961826  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.972328  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.982729  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:23.993068  124345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:33:24.009012  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:24.019688  124345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:24.031220  124345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:33:24.043581  124345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:33:24.055385  124345 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 00:33:24.055509  124345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:33:24.065735  124345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:24.213954  124345 ssh_runner.go:195] Run: sudo systemctl restart crio
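The sed commands run between 00:33:23.95 and 00:33:24.06 rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is cgroupfs, conmon runs in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added as a default sysctl, after which systemd is reloaded and crio restarted. The Go sketch below writes a drop-in with that end state to a scratch directory instead of sed-editing /etc/crio in place; the four settings come from the log, while the TOML section headers follow upstream CRI-O conventions and are an assumption.

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Approximate end state of 02-crio.conf after the edits shown in the log.
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `
    	// Write to a temp directory so the sketch is safe to run anywhere.
    	dir, err := os.MkdirTemp("", "crio-conf")
    	if err != nil {
    		log.Fatal(err)
    	}
    	path := filepath.Join(dir, "02-crio.conf")
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("wrote", path)
    }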
	I1212 00:33:24.419201  124345 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:33:24.419290  124345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:33:24.424492  124345 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 00:33:24.424518  124345 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 00:33:24.424527  124345 command_runner.go:130] > Device: 0,22	Inode: 1290        Links: 1
	I1212 00:33:24.424586  124345 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:33:24.424618  124345 command_runner.go:130] > Access: 2024-12-12 00:33:24.278233514 +0000
	I1212 00:33:24.424627  124345 command_runner.go:130] > Modify: 2024-12-12 00:33:24.278233514 +0000
	I1212 00:33:24.424632  124345 command_runner.go:130] > Change: 2024-12-12 00:33:24.278233514 +0000
	I1212 00:33:24.424636  124345 command_runner.go:130] >  Birth: -
	I1212 00:33:24.424661  124345 start.go:563] Will wait 60s for crictl version
	I1212 00:33:24.424710  124345 ssh_runner.go:195] Run: which crictl
	I1212 00:33:24.428658  124345 command_runner.go:130] > /usr/bin/crictl
	I1212 00:33:24.428716  124345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:33:24.468255  124345 command_runner.go:130] > Version:  0.1.0
	I1212 00:33:24.468286  124345 command_runner.go:130] > RuntimeName:  cri-o
	I1212 00:33:24.468292  124345 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1212 00:33:24.468479  124345 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 00:33:24.469749  124345 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:33:24.469822  124345 ssh_runner.go:195] Run: crio --version
	I1212 00:33:24.499118  124345 command_runner.go:130] > crio version 1.29.1
	I1212 00:33:24.499143  124345 command_runner.go:130] > Version:        1.29.1
	I1212 00:33:24.499148  124345 command_runner.go:130] > GitCommit:      unknown
	I1212 00:33:24.499152  124345 command_runner.go:130] > GitCommitDate:  unknown
	I1212 00:33:24.499156  124345 command_runner.go:130] > GitTreeState:   clean
	I1212 00:33:24.499161  124345 command_runner.go:130] > BuildDate:      2024-12-11T22:36:45Z
	I1212 00:33:24.499165  124345 command_runner.go:130] > GoVersion:      go1.21.6
	I1212 00:33:24.499169  124345 command_runner.go:130] > Compiler:       gc
	I1212 00:33:24.499173  124345 command_runner.go:130] > Platform:       linux/amd64
	I1212 00:33:24.499177  124345 command_runner.go:130] > Linkmode:       dynamic
	I1212 00:33:24.499181  124345 command_runner.go:130] > BuildTags:      
	I1212 00:33:24.499187  124345 command_runner.go:130] >   containers_image_ostree_stub
	I1212 00:33:24.499194  124345 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1212 00:33:24.499202  124345 command_runner.go:130] >   btrfs_noversion
	I1212 00:33:24.499210  124345 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1212 00:33:24.499217  124345 command_runner.go:130] >   libdm_no_deferred_remove
	I1212 00:33:24.499227  124345 command_runner.go:130] >   seccomp
	I1212 00:33:24.499234  124345 command_runner.go:130] > LDFlags:          unknown
	I1212 00:33:24.499242  124345 command_runner.go:130] > SeccompEnabled:   true
	I1212 00:33:24.499249  124345 command_runner.go:130] > AppArmorEnabled:  false
	I1212 00:33:24.499371  124345 ssh_runner.go:195] Run: crio --version
	I1212 00:33:24.528540  124345 command_runner.go:130] > crio version 1.29.1
	I1212 00:33:24.528569  124345 command_runner.go:130] > Version:        1.29.1
	I1212 00:33:24.528576  124345 command_runner.go:130] > GitCommit:      unknown
	I1212 00:33:24.528580  124345 command_runner.go:130] > GitCommitDate:  unknown
	I1212 00:33:24.528583  124345 command_runner.go:130] > GitTreeState:   clean
	I1212 00:33:24.528591  124345 command_runner.go:130] > BuildDate:      2024-12-11T22:36:45Z
	I1212 00:33:24.528597  124345 command_runner.go:130] > GoVersion:      go1.21.6
	I1212 00:33:24.528605  124345 command_runner.go:130] > Compiler:       gc
	I1212 00:33:24.528613  124345 command_runner.go:130] > Platform:       linux/amd64
	I1212 00:33:24.528624  124345 command_runner.go:130] > Linkmode:       dynamic
	I1212 00:33:24.528634  124345 command_runner.go:130] > BuildTags:      
	I1212 00:33:24.528640  124345 command_runner.go:130] >   containers_image_ostree_stub
	I1212 00:33:24.528650  124345 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1212 00:33:24.528657  124345 command_runner.go:130] >   btrfs_noversion
	I1212 00:33:24.528668  124345 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1212 00:33:24.528673  124345 command_runner.go:130] >   libdm_no_deferred_remove
	I1212 00:33:24.528678  124345 command_runner.go:130] >   seccomp
	I1212 00:33:24.528685  124345 command_runner.go:130] > LDFlags:          unknown
	I1212 00:33:24.528696  124345 command_runner.go:130] > SeccompEnabled:   true
	I1212 00:33:24.528703  124345 command_runner.go:130] > AppArmorEnabled:  false
	I1212 00:33:24.530706  124345 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:33:24.532267  124345 main.go:141] libmachine: (multinode-492537) Calling .GetIP
	I1212 00:33:24.535018  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:24.535407  124345 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:33:24.535430  124345 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:33:24.535662  124345 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:33:24.539920  124345 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1212 00:33:24.540024  124345 kubeadm.go:883] updating cluster {Name:multinode-492537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:33:24.540170  124345 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:33:24.540212  124345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.581953  124345 command_runner.go:130] > {
	I1212 00:33:24.581982  124345 command_runner.go:130] >   "images": [
	I1212 00:33:24.581987  124345 command_runner.go:130] >     {
	I1212 00:33:24.581999  124345 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1212 00:33:24.582003  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582011  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1212 00:33:24.582014  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582018  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582028  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1212 00:33:24.582039  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1212 00:33:24.582044  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582052  124345 command_runner.go:130] >       "size": "94965812",
	I1212 00:33:24.582061  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582069  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582077  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582082  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582086  124345 command_runner.go:130] >     },
	I1212 00:33:24.582089  124345 command_runner.go:130] >     {
	I1212 00:33:24.582095  124345 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1212 00:33:24.582099  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582114  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1212 00:33:24.582117  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582123  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582220  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1212 00:33:24.582236  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1212 00:33:24.582245  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582250  124345 command_runner.go:130] >       "size": "94963761",
	I1212 00:33:24.582256  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582274  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582285  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582292  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582298  124345 command_runner.go:130] >     },
	I1212 00:33:24.582309  124345 command_runner.go:130] >     {
	I1212 00:33:24.582321  124345 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1212 00:33:24.582331  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582340  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1212 00:33:24.582345  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582359  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582375  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1212 00:33:24.582390  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1212 00:33:24.582399  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582407  124345 command_runner.go:130] >       "size": "1363676",
	I1212 00:33:24.582416  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582426  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582433  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582439  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582449  124345 command_runner.go:130] >     },
	I1212 00:33:24.582458  124345 command_runner.go:130] >     {
	I1212 00:33:24.582471  124345 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 00:33:24.582480  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582490  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:33:24.582499  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582509  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582521  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 00:33:24.582542  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 00:33:24.582552  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582561  124345 command_runner.go:130] >       "size": "31470524",
	I1212 00:33:24.582571  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582580  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582590  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582599  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582606  124345 command_runner.go:130] >     },
	I1212 00:33:24.582609  124345 command_runner.go:130] >     {
	I1212 00:33:24.582620  124345 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1212 00:33:24.582630  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582644  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1212 00:33:24.582653  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582660  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582674  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1212 00:33:24.582689  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1212 00:33:24.582696  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582703  124345 command_runner.go:130] >       "size": "63273227",
	I1212 00:33:24.582712  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.582720  124345 command_runner.go:130] >       "username": "nonroot",
	I1212 00:33:24.582730  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582737  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582746  124345 command_runner.go:130] >     },
	I1212 00:33:24.582751  124345 command_runner.go:130] >     {
	I1212 00:33:24.582764  124345 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1212 00:33:24.582774  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582782  124345 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1212 00:33:24.582788  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582798  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582814  124345 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1212 00:33:24.582828  124345 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1212 00:33:24.582837  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582847  124345 command_runner.go:130] >       "size": "149009664",
	I1212 00:33:24.582856  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.582864  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.582870  124345 command_runner.go:130] >       },
	I1212 00:33:24.582876  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.582886  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.582897  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.582905  124345 command_runner.go:130] >     },
	I1212 00:33:24.582913  124345 command_runner.go:130] >     {
	I1212 00:33:24.582926  124345 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1212 00:33:24.582935  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.582944  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1212 00:33:24.582954  124345 command_runner.go:130] >       ],
	I1212 00:33:24.582959  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.582974  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1212 00:33:24.582991  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1212 00:33:24.582999  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583006  124345 command_runner.go:130] >       "size": "95274464",
	I1212 00:33:24.583015  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.583022  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.583031  124345 command_runner.go:130] >       },
	I1212 00:33:24.583038  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583044  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583050  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.583059  124345 command_runner.go:130] >     },
	I1212 00:33:24.583068  124345 command_runner.go:130] >     {
	I1212 00:33:24.583092  124345 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1212 00:33:24.583102  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.583110  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1212 00:33:24.583116  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583121  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.583141  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1212 00:33:24.583154  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1212 00:33:24.583160  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583167  124345 command_runner.go:130] >       "size": "89474374",
	I1212 00:33:24.583173  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.583180  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.583185  124345 command_runner.go:130] >       },
	I1212 00:33:24.583196  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583202  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583209  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.583214  124345 command_runner.go:130] >     },
	I1212 00:33:24.583218  124345 command_runner.go:130] >     {
	I1212 00:33:24.583225  124345 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1212 00:33:24.583232  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.583241  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1212 00:33:24.583250  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583257  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.583273  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1212 00:33:24.583288  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1212 00:33:24.583297  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583303  124345 command_runner.go:130] >       "size": "92783513",
	I1212 00:33:24.583310  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.583317  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583327  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583334  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.583343  124345 command_runner.go:130] >     },
	I1212 00:33:24.583353  124345 command_runner.go:130] >     {
	I1212 00:33:24.583366  124345 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1212 00:33:24.583371  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.583378  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1212 00:33:24.583384  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583391  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.583407  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1212 00:33:24.583421  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1212 00:33:24.583430  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583438  124345 command_runner.go:130] >       "size": "68457798",
	I1212 00:33:24.583447  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.583455  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.583461  124345 command_runner.go:130] >       },
	I1212 00:33:24.583469  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583475  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583482  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.583489  124345 command_runner.go:130] >     },
	I1212 00:33:24.583495  124345 command_runner.go:130] >     {
	I1212 00:33:24.583508  124345 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1212 00:33:24.583515  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.583524  124345 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1212 00:33:24.583532  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583543  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.583555  124345 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1212 00:33:24.583570  124345 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1212 00:33:24.583579  124345 command_runner.go:130] >       ],
	I1212 00:33:24.583586  124345 command_runner.go:130] >       "size": "742080",
	I1212 00:33:24.583618  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.583631  124345 command_runner.go:130] >         "value": "65535"
	I1212 00:33:24.583640  124345 command_runner.go:130] >       },
	I1212 00:33:24.583648  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.583657  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.583664  124345 command_runner.go:130] >       "pinned": true
	I1212 00:33:24.583672  124345 command_runner.go:130] >     }
	I1212 00:33:24.583679  124345 command_runner.go:130] >   ]
	I1212 00:33:24.583687  124345 command_runner.go:130] > }
	I1212 00:33:24.583983  124345 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.584006  124345 crio.go:433] Images already preloaded, skipping extraction
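The preload check above works from the `sudo crictl images --output json` dump: the listed repoTags already include every image required for Kubernetes v1.31.2 on CRI-O, so extraction is skipped. As a small illustration of reading that JSON shape (not minikube's own code), the Go sketch below parses the output and looks for one of the tags shown in the log; the struct declares only the fields it uses.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // imageList mirrors the JSON printed by `crictl images --output json` in the log above.
    type imageList struct {
    	Images []struct {
    		ID       string   `json:"id"`
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatalf("crictl images failed: %v", err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		log.Fatal(err)
    	}
    	// Example tag taken from the listing above.
    	want := "registry.k8s.io/kube-apiserver:v1.31.2"
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if tag == want {
    				fmt.Println("found", want, "as", img.ID)
    				return
    			}
    		}
    	}
    	fmt.Println(want, "not present; preload would need extraction")
    }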
	I1212 00:33:24.584081  124345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:33:24.617660  124345 command_runner.go:130] > {
	I1212 00:33:24.617682  124345 command_runner.go:130] >   "images": [
	I1212 00:33:24.617687  124345 command_runner.go:130] >     {
	I1212 00:33:24.617699  124345 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1212 00:33:24.617707  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.617716  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1212 00:33:24.617722  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617735  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.617746  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1212 00:33:24.617757  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1212 00:33:24.617761  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617765  124345 command_runner.go:130] >       "size": "94965812",
	I1212 00:33:24.617769  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.617773  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.617783  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.617792  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.617799  124345 command_runner.go:130] >     },
	I1212 00:33:24.617805  124345 command_runner.go:130] >     {
	I1212 00:33:24.617814  124345 command_runner.go:130] >       "id": "50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e",
	I1212 00:33:24.617821  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.617829  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241108-5c6d2daf"
	I1212 00:33:24.617837  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617843  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.617855  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3",
	I1212 00:33:24.617867  124345 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"
	I1212 00:33:24.617873  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617883  124345 command_runner.go:130] >       "size": "94963761",
	I1212 00:33:24.617890  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.617905  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.617914  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.617923  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.617931  124345 command_runner.go:130] >     },
	I1212 00:33:24.617938  124345 command_runner.go:130] >     {
	I1212 00:33:24.617947  124345 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1212 00:33:24.617951  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.617961  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1212 00:33:24.617971  124345 command_runner.go:130] >       ],
	I1212 00:33:24.617981  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.617996  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1212 00:33:24.618011  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1212 00:33:24.618025  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618035  124345 command_runner.go:130] >       "size": "1363676",
	I1212 00:33:24.618042  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.618048  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618061  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618073  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618080  124345 command_runner.go:130] >     },
	I1212 00:33:24.618086  124345 command_runner.go:130] >     {
	I1212 00:33:24.618098  124345 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1212 00:33:24.618109  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618116  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 00:33:24.618125  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618132  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618145  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1212 00:33:24.618170  124345 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1212 00:33:24.618180  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618188  124345 command_runner.go:130] >       "size": "31470524",
	I1212 00:33:24.618198  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.618207  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618215  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618221  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618225  124345 command_runner.go:130] >     },
	I1212 00:33:24.618234  124345 command_runner.go:130] >     {
	I1212 00:33:24.618246  124345 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1212 00:33:24.618256  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618266  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1212 00:33:24.618275  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618283  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618298  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1212 00:33:24.618310  124345 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1212 00:33:24.618319  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618327  124345 command_runner.go:130] >       "size": "63273227",
	I1212 00:33:24.618344  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.618363  124345 command_runner.go:130] >       "username": "nonroot",
	I1212 00:33:24.618373  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618382  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618390  124345 command_runner.go:130] >     },
	I1212 00:33:24.618399  124345 command_runner.go:130] >     {
	I1212 00:33:24.618412  124345 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1212 00:33:24.618423  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618434  124345 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1212 00:33:24.618443  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618460  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618477  124345 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1212 00:33:24.618484  124345 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1212 00:33:24.618490  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618496  124345 command_runner.go:130] >       "size": "149009664",
	I1212 00:33:24.618502  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.618509  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.618522  124345 command_runner.go:130] >       },
	I1212 00:33:24.618528  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618535  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618544  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618552  124345 command_runner.go:130] >     },
	I1212 00:33:24.618557  124345 command_runner.go:130] >     {
	I1212 00:33:24.618568  124345 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1212 00:33:24.618576  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618608  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1212 00:33:24.618621  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618628  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618640  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1212 00:33:24.618655  124345 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1212 00:33:24.618662  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618669  124345 command_runner.go:130] >       "size": "95274464",
	I1212 00:33:24.618679  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.618689  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.618705  124345 command_runner.go:130] >       },
	I1212 00:33:24.618715  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618724  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618734  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618743  124345 command_runner.go:130] >     },
	I1212 00:33:24.618749  124345 command_runner.go:130] >     {
	I1212 00:33:24.618760  124345 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1212 00:33:24.618770  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618783  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1212 00:33:24.618792  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618802  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618834  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1212 00:33:24.618849  124345 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1212 00:33:24.618856  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618866  124345 command_runner.go:130] >       "size": "89474374",
	I1212 00:33:24.618876  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.618885  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.618894  124345 command_runner.go:130] >       },
	I1212 00:33:24.618904  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.618913  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.618922  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.618926  124345 command_runner.go:130] >     },
	I1212 00:33:24.618930  124345 command_runner.go:130] >     {
	I1212 00:33:24.618943  124345 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1212 00:33:24.618950  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.618962  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1212 00:33:24.618970  124345 command_runner.go:130] >       ],
	I1212 00:33:24.618977  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.618992  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1212 00:33:24.619008  124345 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1212 00:33:24.619014  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619020  124345 command_runner.go:130] >       "size": "92783513",
	I1212 00:33:24.619029  124345 command_runner.go:130] >       "uid": null,
	I1212 00:33:24.619046  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.619055  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.619061  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.619069  124345 command_runner.go:130] >     },
	I1212 00:33:24.619078  124345 command_runner.go:130] >     {
	I1212 00:33:24.619088  124345 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1212 00:33:24.619097  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.619106  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1212 00:33:24.619115  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619125  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.619137  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1212 00:33:24.619152  124345 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1212 00:33:24.619162  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619171  124345 command_runner.go:130] >       "size": "68457798",
	I1212 00:33:24.619180  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.619186  124345 command_runner.go:130] >         "value": "0"
	I1212 00:33:24.619194  124345 command_runner.go:130] >       },
	I1212 00:33:24.619204  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.619214  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.619222  124345 command_runner.go:130] >       "pinned": false
	I1212 00:33:24.619231  124345 command_runner.go:130] >     },
	I1212 00:33:24.619239  124345 command_runner.go:130] >     {
	I1212 00:33:24.619252  124345 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1212 00:33:24.619261  124345 command_runner.go:130] >       "repoTags": [
	I1212 00:33:24.619272  124345 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1212 00:33:24.619278  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619285  124345 command_runner.go:130] >       "repoDigests": [
	I1212 00:33:24.619300  124345 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1212 00:33:24.619314  124345 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1212 00:33:24.619323  124345 command_runner.go:130] >       ],
	I1212 00:33:24.619338  124345 command_runner.go:130] >       "size": "742080",
	I1212 00:33:24.619348  124345 command_runner.go:130] >       "uid": {
	I1212 00:33:24.619357  124345 command_runner.go:130] >         "value": "65535"
	I1212 00:33:24.619371  124345 command_runner.go:130] >       },
	I1212 00:33:24.619381  124345 command_runner.go:130] >       "username": "",
	I1212 00:33:24.619388  124345 command_runner.go:130] >       "spec": null,
	I1212 00:33:24.619398  124345 command_runner.go:130] >       "pinned": true
	I1212 00:33:24.619406  124345 command_runner.go:130] >     }
	I1212 00:33:24.619415  124345 command_runner.go:130] >   ]
	I1212 00:33:24.619423  124345 command_runner.go:130] > }
	I1212 00:33:24.619620  124345 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:33:24.619637  124345 cache_images.go:84] Images are preloaded, skipping loading
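	For reference, the image inventory printed above is a JSON listing (fields id, repoTags, repoDigests, size, uid, username, spec, pinned). Below is a minimal Go sketch of decoding such a listing; the top-level "images" key, the struct, and the function names are illustrative assumptions, not minikube's or CRI-O's own types.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageListing mirrors the JSON keys visible in the log output above.
	// It is an illustrative sketch, not a type taken from minikube or CRI-O.
	type imageListing struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"` // reported as a decimal string, e.g. "1363676"
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"` // assumed top-level key wrapping the array shown above
		// (uid and spec are omitted here; they appear as null in the output above.)
	}

	func main() {
		// Sample data taken from the busybox entry in the log above.
		raw := []byte(`{"images":[{"id":"8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a","repoTags":["gcr.io/k8s-minikube/busybox:1.28"],"repoDigests":[],"size":"1363676","username":"","pinned":false}]}`)
		var listing imageListing
		if err := json.Unmarshal(raw, &listing); err != nil {
			panic(err)
		}
		for _, img := range listing.Images {
			fmt.Printf("%v (size %s, pinned=%v)\n", img.RepoTags, img.Size, img.Pinned)
		}
	}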
	I1212 00:33:24.619647  124345 kubeadm.go:934] updating node { 192.168.39.208 8443 v1.31.2 crio true true} ...
	I1212 00:33:24.619768  124345 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-492537 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
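	The "config:" line above is a Go struct printed with fmt's %+v verb, which is why it renders as {Field:value Field:value ...} on a single line. A minimal, illustrative sketch of that formatting follows; the struct below is a cut-down stand-in, not minikube's actual cluster config type.

	package main

	import "fmt"

	// clusterConfig is a reduced, illustrative stand-in for the config struct
	// whose fields appear in the log line above.
	type clusterConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		ServiceCIDR       string
	}

	func main() {
		cfg := clusterConfig{
			KubernetesVersion: "v1.31.2",
			ClusterName:       "multinode-492537",
			ContainerRuntime:  "crio",
			ServiceCIDR:       "10.96.0.0/12",
		}
		// Prints: config: {KubernetesVersion:v1.31.2 ClusterName:multinode-492537 ContainerRuntime:crio ServiceCIDR:10.96.0.0/12}
		fmt.Printf("config: %+v\n", cfg)
	}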
	I1212 00:33:24.619854  124345 ssh_runner.go:195] Run: crio config
	I1212 00:33:24.654076  124345 command_runner.go:130] ! time="2024-12-12 00:33:24.621740261Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1212 00:33:24.659298  124345 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 00:33:24.673071  124345 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 00:33:24.673099  124345 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 00:33:24.673106  124345 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 00:33:24.673109  124345 command_runner.go:130] > #
	I1212 00:33:24.673117  124345 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 00:33:24.673122  124345 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 00:33:24.673128  124345 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 00:33:24.673141  124345 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 00:33:24.673145  124345 command_runner.go:130] > # reload'.
	I1212 00:33:24.673151  124345 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 00:33:24.673157  124345 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 00:33:24.673164  124345 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 00:33:24.673170  124345 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 00:33:24.673174  124345 command_runner.go:130] > [crio]
	I1212 00:33:24.673180  124345 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 00:33:24.673185  124345 command_runner.go:130] > # containers images, in this directory.
	I1212 00:33:24.673189  124345 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1212 00:33:24.673200  124345 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 00:33:24.673207  124345 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1212 00:33:24.673214  124345 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 00:33:24.673220  124345 command_runner.go:130] > # imagestore = ""
	I1212 00:33:24.673226  124345 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 00:33:24.673237  124345 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 00:33:24.673241  124345 command_runner.go:130] > storage_driver = "overlay"
	I1212 00:33:24.673246  124345 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 00:33:24.673254  124345 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 00:33:24.673258  124345 command_runner.go:130] > storage_option = [
	I1212 00:33:24.673263  124345 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1212 00:33:24.673273  124345 command_runner.go:130] > ]
	I1212 00:33:24.673282  124345 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 00:33:24.673288  124345 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 00:33:24.673295  124345 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 00:33:24.673300  124345 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 00:33:24.673308  124345 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 00:33:24.673312  124345 command_runner.go:130] > # always happen on a node reboot
	I1212 00:33:24.673317  124345 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 00:33:24.673329  124345 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 00:33:24.673337  124345 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 00:33:24.673343  124345 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 00:33:24.673348  124345 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1212 00:33:24.673355  124345 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 00:33:24.673365  124345 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 00:33:24.673369  124345 command_runner.go:130] > # internal_wipe = true
	I1212 00:33:24.673376  124345 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 00:33:24.673386  124345 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 00:33:24.673390  124345 command_runner.go:130] > # internal_repair = false
	I1212 00:33:24.673395  124345 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 00:33:24.673403  124345 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 00:33:24.673409  124345 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 00:33:24.673415  124345 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 00:33:24.673421  124345 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 00:33:24.673427  124345 command_runner.go:130] > [crio.api]
	I1212 00:33:24.673432  124345 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 00:33:24.673437  124345 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 00:33:24.673442  124345 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 00:33:24.673452  124345 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 00:33:24.673458  124345 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 00:33:24.673466  124345 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 00:33:24.673469  124345 command_runner.go:130] > # stream_port = "0"
	I1212 00:33:24.673474  124345 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 00:33:24.673481  124345 command_runner.go:130] > # stream_enable_tls = false
	I1212 00:33:24.673487  124345 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 00:33:24.673492  124345 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 00:33:24.673500  124345 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 00:33:24.673508  124345 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1212 00:33:24.673512  124345 command_runner.go:130] > # minutes.
	I1212 00:33:24.673519  124345 command_runner.go:130] > # stream_tls_cert = ""
	I1212 00:33:24.673524  124345 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 00:33:24.673533  124345 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1212 00:33:24.673537  124345 command_runner.go:130] > # stream_tls_key = ""
	I1212 00:33:24.673545  124345 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 00:33:24.673552  124345 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 00:33:24.673572  124345 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1212 00:33:24.673579  124345 command_runner.go:130] > # stream_tls_ca = ""
	I1212 00:33:24.673586  124345 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:33:24.673590  124345 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1212 00:33:24.673597  124345 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 00:33:24.673604  124345 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1212 00:33:24.673609  124345 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 00:33:24.673617  124345 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 00:33:24.673621  124345 command_runner.go:130] > [crio.runtime]
	I1212 00:33:24.673627  124345 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 00:33:24.673634  124345 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 00:33:24.673639  124345 command_runner.go:130] > # "nofile=1024:2048"
	I1212 00:33:24.673647  124345 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 00:33:24.673650  124345 command_runner.go:130] > # default_ulimits = [
	I1212 00:33:24.673653  124345 command_runner.go:130] > # ]
	I1212 00:33:24.673659  124345 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 00:33:24.673668  124345 command_runner.go:130] > # no_pivot = false
	I1212 00:33:24.673675  124345 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 00:33:24.673681  124345 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 00:33:24.673688  124345 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 00:33:24.673693  124345 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 00:33:24.673700  124345 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 00:33:24.673708  124345 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:33:24.673716  124345 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1212 00:33:24.673721  124345 command_runner.go:130] > # Cgroup setting for conmon
	I1212 00:33:24.673728  124345 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 00:33:24.673734  124345 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 00:33:24.673740  124345 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 00:33:24.673745  124345 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 00:33:24.673755  124345 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 00:33:24.673761  124345 command_runner.go:130] > conmon_env = [
	I1212 00:33:24.673766  124345 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 00:33:24.673770  124345 command_runner.go:130] > ]
	I1212 00:33:24.673775  124345 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 00:33:24.673781  124345 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 00:33:24.673787  124345 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 00:33:24.673794  124345 command_runner.go:130] > # default_env = [
	I1212 00:33:24.673798  124345 command_runner.go:130] > # ]
	I1212 00:33:24.673805  124345 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 00:33:24.673812  124345 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1212 00:33:24.673817  124345 command_runner.go:130] > # selinux = false
	I1212 00:33:24.673823  124345 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 00:33:24.673832  124345 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1212 00:33:24.673837  124345 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1212 00:33:24.673843  124345 command_runner.go:130] > # seccomp_profile = ""
	I1212 00:33:24.673849  124345 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1212 00:33:24.673855  124345 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1212 00:33:24.673862  124345 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1212 00:33:24.673866  124345 command_runner.go:130] > # which might increase security.
	I1212 00:33:24.673871  124345 command_runner.go:130] > # This option is currently deprecated,
	I1212 00:33:24.673879  124345 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1212 00:33:24.673883  124345 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1212 00:33:24.673891  124345 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 00:33:24.673897  124345 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 00:33:24.673905  124345 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 00:33:24.673911  124345 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 00:33:24.673918  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.673922  124345 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 00:33:24.673931  124345 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 00:33:24.673935  124345 command_runner.go:130] > # the cgroup blockio controller.
	I1212 00:33:24.673942  124345 command_runner.go:130] > # blockio_config_file = ""
	I1212 00:33:24.673948  124345 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 00:33:24.673952  124345 command_runner.go:130] > # blockio parameters.
	I1212 00:33:24.673956  124345 command_runner.go:130] > # blockio_reload = false
	I1212 00:33:24.673962  124345 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 00:33:24.673968  124345 command_runner.go:130] > # irqbalance daemon.
	I1212 00:33:24.673972  124345 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 00:33:24.673981  124345 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 00:33:24.673989  124345 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 00:33:24.673996  124345 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 00:33:24.674003  124345 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 00:33:24.674010  124345 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 00:33:24.674018  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.674024  124345 command_runner.go:130] > # rdt_config_file = ""
	I1212 00:33:24.674033  124345 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 00:33:24.674038  124345 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 00:33:24.674078  124345 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 00:33:24.674088  124345 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 00:33:24.674094  124345 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 00:33:24.674103  124345 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 00:33:24.674107  124345 command_runner.go:130] > # will be added.
	I1212 00:33:24.674111  124345 command_runner.go:130] > # default_capabilities = [
	I1212 00:33:24.674116  124345 command_runner.go:130] > # 	"CHOWN",
	I1212 00:33:24.674121  124345 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 00:33:24.674125  124345 command_runner.go:130] > # 	"FSETID",
	I1212 00:33:24.674131  124345 command_runner.go:130] > # 	"FOWNER",
	I1212 00:33:24.674134  124345 command_runner.go:130] > # 	"SETGID",
	I1212 00:33:24.674137  124345 command_runner.go:130] > # 	"SETUID",
	I1212 00:33:24.674143  124345 command_runner.go:130] > # 	"SETPCAP",
	I1212 00:33:24.674146  124345 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 00:33:24.674150  124345 command_runner.go:130] > # 	"KILL",
	I1212 00:33:24.674153  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674161  124345 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 00:33:24.674169  124345 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 00:33:24.674174  124345 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 00:33:24.674182  124345 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 00:33:24.674188  124345 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:33:24.674193  124345 command_runner.go:130] > default_sysctls = [
	I1212 00:33:24.674198  124345 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 00:33:24.674204  124345 command_runner.go:130] > ]
	I1212 00:33:24.674209  124345 command_runner.go:130] > # List of devices on the host that a
	I1212 00:33:24.674217  124345 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 00:33:24.674221  124345 command_runner.go:130] > # allowed_devices = [
	I1212 00:33:24.674225  124345 command_runner.go:130] > # 	"/dev/fuse",
	I1212 00:33:24.674228  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674233  124345 command_runner.go:130] > # List of additional devices, specified as
	I1212 00:33:24.674242  124345 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 00:33:24.674247  124345 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 00:33:24.674263  124345 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 00:33:24.674276  124345 command_runner.go:130] > # additional_devices = [
	I1212 00:33:24.674279  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674284  124345 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 00:33:24.674290  124345 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 00:33:24.674294  124345 command_runner.go:130] > # 	"/etc/cdi",
	I1212 00:33:24.674300  124345 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 00:33:24.674304  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674311  124345 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 00:33:24.674318  124345 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 00:33:24.674323  124345 command_runner.go:130] > # Defaults to false.
	I1212 00:33:24.674330  124345 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 00:33:24.674336  124345 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 00:33:24.674344  124345 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 00:33:24.674348  124345 command_runner.go:130] > # hooks_dir = [
	I1212 00:33:24.674352  124345 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 00:33:24.674358  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674363  124345 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 00:33:24.674372  124345 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 00:33:24.674377  124345 command_runner.go:130] > # its default mounts from the following two files:
	I1212 00:33:24.674380  124345 command_runner.go:130] > #
	I1212 00:33:24.674385  124345 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 00:33:24.674393  124345 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 00:33:24.674398  124345 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 00:33:24.674402  124345 command_runner.go:130] > #
	I1212 00:33:24.674408  124345 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 00:33:24.674417  124345 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 00:33:24.674423  124345 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 00:33:24.674430  124345 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 00:33:24.674433  124345 command_runner.go:130] > #
	I1212 00:33:24.674439  124345 command_runner.go:130] > # default_mounts_file = ""
	I1212 00:33:24.674444  124345 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 00:33:24.674453  124345 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 00:33:24.674457  124345 command_runner.go:130] > pids_limit = 1024
	I1212 00:33:24.674465  124345 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1212 00:33:24.674471  124345 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 00:33:24.674477  124345 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 00:33:24.674485  124345 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 00:33:24.674490  124345 command_runner.go:130] > # log_size_max = -1
	I1212 00:33:24.674499  124345 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 00:33:24.674507  124345 command_runner.go:130] > # log_to_journald = false
	I1212 00:33:24.674513  124345 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 00:33:24.674520  124345 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 00:33:24.674525  124345 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 00:33:24.674530  124345 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 00:33:24.674535  124345 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 00:33:24.674541  124345 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 00:33:24.674546  124345 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 00:33:24.674552  124345 command_runner.go:130] > # read_only = false
	I1212 00:33:24.674558  124345 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 00:33:24.674566  124345 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 00:33:24.674570  124345 command_runner.go:130] > # live configuration reload.
	I1212 00:33:24.674576  124345 command_runner.go:130] > # log_level = "info"
	I1212 00:33:24.674582  124345 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 00:33:24.674588  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.674591  124345 command_runner.go:130] > # log_filter = ""
	I1212 00:33:24.674597  124345 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 00:33:24.674607  124345 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 00:33:24.674613  124345 command_runner.go:130] > # separated by comma.
	I1212 00:33:24.674621  124345 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:33:24.674627  124345 command_runner.go:130] > # uid_mappings = ""
	I1212 00:33:24.674632  124345 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 00:33:24.674638  124345 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 00:33:24.674644  124345 command_runner.go:130] > # separated by comma.
	I1212 00:33:24.674651  124345 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:33:24.674657  124345 command_runner.go:130] > # gid_mappings = ""
	I1212 00:33:24.674663  124345 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 00:33:24.674671  124345 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:33:24.674677  124345 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:33:24.674687  124345 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:33:24.674691  124345 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 00:33:24.674697  124345 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 00:33:24.674704  124345 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 00:33:24.674713  124345 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 00:33:24.674721  124345 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 00:33:24.674729  124345 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 00:33:24.674735  124345 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 00:33:24.674740  124345 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 00:33:24.674748  124345 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 00:33:24.674752  124345 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 00:33:24.674758  124345 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 00:33:24.674766  124345 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 00:33:24.674771  124345 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 00:33:24.674778  124345 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 00:33:24.674781  124345 command_runner.go:130] > drop_infra_ctr = false
	I1212 00:33:24.674787  124345 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 00:33:24.674795  124345 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 00:33:24.674802  124345 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 00:33:24.674808  124345 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 00:33:24.674815  124345 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 00:33:24.674823  124345 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 00:33:24.674828  124345 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 00:33:24.674835  124345 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 00:33:24.674840  124345 command_runner.go:130] > # shared_cpuset = ""
	I1212 00:33:24.674847  124345 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 00:33:24.674852  124345 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 00:33:24.674858  124345 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 00:33:24.674864  124345 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 00:33:24.674868  124345 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1212 00:33:24.674874  124345 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 00:33:24.674882  124345 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 00:33:24.674886  124345 command_runner.go:130] > # enable_criu_support = false
	I1212 00:33:24.674893  124345 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 00:33:24.674899  124345 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 00:33:24.674906  124345 command_runner.go:130] > # enable_pod_events = false
	I1212 00:33:24.674911  124345 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 00:33:24.674924  124345 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 00:33:24.674930  124345 command_runner.go:130] > # default_runtime = "runc"
	I1212 00:33:24.674934  124345 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 00:33:24.674941  124345 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 00:33:24.674952  124345 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 00:33:24.674961  124345 command_runner.go:130] > # creation as a file is not desired either.
	I1212 00:33:24.674971  124345 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 00:33:24.674977  124345 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 00:33:24.674982  124345 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 00:33:24.674988  124345 command_runner.go:130] > # ]
	I1212 00:33:24.674994  124345 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 00:33:24.675002  124345 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 00:33:24.675009  124345 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 00:33:24.675017  124345 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 00:33:24.675021  124345 command_runner.go:130] > #
	I1212 00:33:24.675025  124345 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 00:33:24.675032  124345 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 00:33:24.675053  124345 command_runner.go:130] > # runtime_type = "oci"
	I1212 00:33:24.675060  124345 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 00:33:24.675064  124345 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 00:33:24.675070  124345 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 00:33:24.675075  124345 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 00:33:24.675085  124345 command_runner.go:130] > # monitor_env = []
	I1212 00:33:24.675090  124345 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 00:33:24.675096  124345 command_runner.go:130] > # allowed_annotations = []
	I1212 00:33:24.675101  124345 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 00:33:24.675107  124345 command_runner.go:130] > # Where:
	I1212 00:33:24.675112  124345 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 00:33:24.675118  124345 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 00:33:24.675126  124345 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 00:33:24.675132  124345 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 00:33:24.675138  124345 command_runner.go:130] > #   in $PATH.
	I1212 00:33:24.675145  124345 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 00:33:24.675150  124345 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 00:33:24.675156  124345 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 00:33:24.675164  124345 command_runner.go:130] > #   state.
	I1212 00:33:24.675173  124345 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 00:33:24.675183  124345 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 00:33:24.675192  124345 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 00:33:24.675198  124345 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 00:33:24.675206  124345 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 00:33:24.675216  124345 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 00:33:24.675230  124345 command_runner.go:130] > #   The currently recognized values are:
	I1212 00:33:24.675241  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 00:33:24.675255  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 00:33:24.675272  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 00:33:24.675286  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 00:33:24.675299  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 00:33:24.675312  124345 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 00:33:24.675323  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 00:33:24.675334  124345 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 00:33:24.675349  124345 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 00:33:24.675363  124345 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 00:33:24.675373  124345 command_runner.go:130] > #   deprecated option "conmon".
	I1212 00:33:24.675385  124345 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 00:33:24.675396  124345 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 00:33:24.675410  124345 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 00:33:24.675422  124345 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 00:33:24.675437  124345 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 00:33:24.675449  124345 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 00:33:24.675463  124345 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 00:33:24.675477  124345 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 00:33:24.675483  124345 command_runner.go:130] > #
	I1212 00:33:24.675494  124345 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 00:33:24.675503  124345 command_runner.go:130] > #
	I1212 00:33:24.675515  124345 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 00:33:24.675530  124345 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 00:33:24.675539  124345 command_runner.go:130] > #
	I1212 00:33:24.675550  124345 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 00:33:24.675564  124345 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 00:33:24.675572  124345 command_runner.go:130] > #
	I1212 00:33:24.675583  124345 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 00:33:24.675603  124345 command_runner.go:130] > # feature.
	I1212 00:33:24.675609  124345 command_runner.go:130] > #
	I1212 00:33:24.675621  124345 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 00:33:24.675634  124345 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 00:33:24.675648  124345 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 00:33:24.675665  124345 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 00:33:24.675679  124345 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 00:33:24.675687  124345 command_runner.go:130] > #
	I1212 00:33:24.675698  124345 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 00:33:24.675711  124345 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 00:33:24.675717  124345 command_runner.go:130] > #
	I1212 00:33:24.675731  124345 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 00:33:24.675744  124345 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 00:33:24.675752  124345 command_runner.go:130] > #
	I1212 00:33:24.675762  124345 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 00:33:24.675776  124345 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 00:33:24.675785  124345 command_runner.go:130] > # limitation.
	I1212 00:33:24.675798  124345 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 00:33:24.675807  124345 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1212 00:33:24.675814  124345 command_runner.go:130] > runtime_type = "oci"
	I1212 00:33:24.675822  124345 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 00:33:24.675832  124345 command_runner.go:130] > runtime_config_path = ""
	I1212 00:33:24.675841  124345 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 00:33:24.675851  124345 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 00:33:24.675859  124345 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 00:33:24.675866  124345 command_runner.go:130] > monitor_env = [
	I1212 00:33:24.675878  124345 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1212 00:33:24.675887  124345 command_runner.go:130] > ]
	I1212 00:33:24.675896  124345 command_runner.go:130] > privileged_without_host_devices = false
	I1212 00:33:24.675910  124345 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 00:33:24.675922  124345 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 00:33:24.675936  124345 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 00:33:24.675952  124345 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 00:33:24.675969  124345 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1212 00:33:24.675981  124345 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 00:33:24.675997  124345 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 00:33:24.676012  124345 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 00:33:24.676022  124345 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 00:33:24.676032  124345 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 00:33:24.676037  124345 command_runner.go:130] > # Example:
	I1212 00:33:24.676043  124345 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 00:33:24.676049  124345 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 00:33:24.676059  124345 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 00:33:24.676066  124345 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 00:33:24.676073  124345 command_runner.go:130] > # cpuset = 0
	I1212 00:33:24.676081  124345 command_runner.go:130] > # cpushares = "0-1"
	I1212 00:33:24.676088  124345 command_runner.go:130] > # Where:
	I1212 00:33:24.676098  124345 command_runner.go:130] > # The workload name is workload-type.
	I1212 00:33:24.676109  124345 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 00:33:24.676118  124345 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 00:33:24.676128  124345 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 00:33:24.676141  124345 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 00:33:24.676154  124345 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 00:33:24.676165  124345 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 00:33:24.676179  124345 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 00:33:24.676190  124345 command_runner.go:130] > # Default value is set to true
	I1212 00:33:24.676201  124345 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 00:33:24.676213  124345 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 00:33:24.676225  124345 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 00:33:24.676238  124345 command_runner.go:130] > # Default value is set to 'false'
	I1212 00:33:24.676248  124345 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 00:33:24.676259  124345 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 00:33:24.676275  124345 command_runner.go:130] > #
	I1212 00:33:24.676289  124345 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 00:33:24.676302  124345 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1212 00:33:24.676316  124345 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1212 00:33:24.676330  124345 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1212 00:33:24.676343  124345 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1212 00:33:24.676349  124345 command_runner.go:130] > [crio.image]
	I1212 00:33:24.676360  124345 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 00:33:24.676371  124345 command_runner.go:130] > # default_transport = "docker://"
	I1212 00:33:24.676385  124345 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 00:33:24.676399  124345 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:33:24.676409  124345 command_runner.go:130] > # global_auth_file = ""
	I1212 00:33:24.676418  124345 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 00:33:24.676428  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.676437  124345 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1212 00:33:24.676451  124345 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 00:33:24.676466  124345 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 00:33:24.676478  124345 command_runner.go:130] > # This option supports live configuration reload.
	I1212 00:33:24.676493  124345 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 00:33:24.676508  124345 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 00:33:24.676521  124345 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 00:33:24.676532  124345 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 00:33:24.676545  124345 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 00:33:24.676556  124345 command_runner.go:130] > # pause_command = "/pause"
	I1212 00:33:24.676569  124345 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 00:33:24.676582  124345 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 00:33:24.676596  124345 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 00:33:24.676612  124345 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 00:33:24.676626  124345 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 00:33:24.676639  124345 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 00:33:24.676651  124345 command_runner.go:130] > # pinned_images = [
	I1212 00:33:24.676661  124345 command_runner.go:130] > # ]
	I1212 00:33:24.676675  124345 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 00:33:24.676688  124345 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 00:33:24.676701  124345 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 00:33:24.676712  124345 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 00:33:24.676724  124345 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 00:33:24.676735  124345 command_runner.go:130] > # signature_policy = ""
	I1212 00:33:24.676747  124345 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 00:33:24.676762  124345 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 00:33:24.676776  124345 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 00:33:24.676790  124345 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1212 00:33:24.676800  124345 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 00:33:24.676819  124345 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 00:33:24.676833  124345 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 00:33:24.676847  124345 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 00:33:24.676857  124345 command_runner.go:130] > # changing them here.
	I1212 00:33:24.676866  124345 command_runner.go:130] > # insecure_registries = [
	I1212 00:33:24.676876  124345 command_runner.go:130] > # ]
	I1212 00:33:24.676888  124345 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 00:33:24.676900  124345 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 00:33:24.676910  124345 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 00:33:24.676921  124345 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 00:33:24.676930  124345 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 00:33:24.676946  124345 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1212 00:33:24.676955  124345 command_runner.go:130] > # CNI plugins.
	I1212 00:33:24.676962  124345 command_runner.go:130] > [crio.network]
	I1212 00:33:24.676976  124345 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 00:33:24.676990  124345 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 00:33:24.677000  124345 command_runner.go:130] > # cni_default_network = ""
	I1212 00:33:24.677013  124345 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 00:33:24.677024  124345 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 00:33:24.677035  124345 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 00:33:24.677046  124345 command_runner.go:130] > # plugin_dirs = [
	I1212 00:33:24.677056  124345 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 00:33:24.677064  124345 command_runner.go:130] > # ]
	I1212 00:33:24.677074  124345 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 00:33:24.677083  124345 command_runner.go:130] > [crio.metrics]
	I1212 00:33:24.677092  124345 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 00:33:24.677102  124345 command_runner.go:130] > enable_metrics = true
	I1212 00:33:24.677110  124345 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 00:33:24.677121  124345 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 00:33:24.677132  124345 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 00:33:24.677144  124345 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 00:33:24.677156  124345 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 00:33:24.677167  124345 command_runner.go:130] > # metrics_collectors = [
	I1212 00:33:24.677175  124345 command_runner.go:130] > # 	"operations",
	I1212 00:33:24.677187  124345 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1212 00:33:24.677198  124345 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1212 00:33:24.677208  124345 command_runner.go:130] > # 	"operations_errors",
	I1212 00:33:24.677216  124345 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1212 00:33:24.677225  124345 command_runner.go:130] > # 	"image_pulls_by_name",
	I1212 00:33:24.677234  124345 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1212 00:33:24.677246  124345 command_runner.go:130] > # 	"image_pulls_failures",
	I1212 00:33:24.677257  124345 command_runner.go:130] > # 	"image_pulls_successes",
	I1212 00:33:24.677274  124345 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 00:33:24.677284  124345 command_runner.go:130] > # 	"image_layer_reuse",
	I1212 00:33:24.677293  124345 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 00:33:24.677302  124345 command_runner.go:130] > # 	"containers_oom_total",
	I1212 00:33:24.677309  124345 command_runner.go:130] > # 	"containers_oom",
	I1212 00:33:24.677319  124345 command_runner.go:130] > # 	"processes_defunct",
	I1212 00:33:24.677326  124345 command_runner.go:130] > # 	"operations_total",
	I1212 00:33:24.677337  124345 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 00:33:24.677347  124345 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 00:33:24.677358  124345 command_runner.go:130] > # 	"operations_errors_total",
	I1212 00:33:24.677367  124345 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 00:33:24.677377  124345 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 00:33:24.677387  124345 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 00:33:24.677396  124345 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 00:33:24.677410  124345 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 00:33:24.677421  124345 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 00:33:24.677433  124345 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 00:33:24.677444  124345 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 00:33:24.677452  124345 command_runner.go:130] > # ]
	I1212 00:33:24.677461  124345 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 00:33:24.677469  124345 command_runner.go:130] > # metrics_port = 9090
	I1212 00:33:24.677479  124345 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 00:33:24.677488  124345 command_runner.go:130] > # metrics_socket = ""
	I1212 00:33:24.677498  124345 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 00:33:24.677512  124345 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 00:33:24.677526  124345 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 00:33:24.677537  124345 command_runner.go:130] > # certificate on any modification event.
	I1212 00:33:24.677545  124345 command_runner.go:130] > # metrics_cert = ""
	I1212 00:33:24.677555  124345 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 00:33:24.677566  124345 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 00:33:24.677574  124345 command_runner.go:130] > # metrics_key = ""
	I1212 00:33:24.677585  124345 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 00:33:24.677594  124345 command_runner.go:130] > [crio.tracing]
	I1212 00:33:24.677605  124345 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 00:33:24.677615  124345 command_runner.go:130] > # enable_tracing = false
	I1212 00:33:24.677628  124345 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 00:33:24.677637  124345 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1212 00:33:24.677649  124345 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 00:33:24.677661  124345 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 00:33:24.677671  124345 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 00:33:24.677679  124345 command_runner.go:130] > [crio.nri]
	I1212 00:33:24.677690  124345 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 00:33:24.677698  124345 command_runner.go:130] > # enable_nri = false
	I1212 00:33:24.677706  124345 command_runner.go:130] > # NRI socket to listen on.
	I1212 00:33:24.677718  124345 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 00:33:24.677728  124345 command_runner.go:130] > # NRI plugin directory to use.
	I1212 00:33:24.677736  124345 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 00:33:24.677748  124345 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 00:33:24.677760  124345 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 00:33:24.677773  124345 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 00:33:24.677784  124345 command_runner.go:130] > # nri_disable_connections = false
	I1212 00:33:24.677794  124345 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 00:33:24.677804  124345 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 00:33:24.677816  124345 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 00:33:24.677825  124345 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 00:33:24.677837  124345 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 00:33:24.677846  124345 command_runner.go:130] > [crio.stats]
	I1212 00:33:24.677860  124345 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 00:33:24.677872  124345 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 00:33:24.677883  124345 command_runner.go:130] > # stats_collection_period = 0
	I1212 00:33:24.678002  124345 cni.go:84] Creating CNI manager for ""
	I1212 00:33:24.678014  124345 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1212 00:33:24.678026  124345 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:33:24.678058  124345 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-492537 NodeName:multinode-492537 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:33:24.678230  124345 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-492537"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.208"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:33:24.678320  124345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:33:24.689740  124345 command_runner.go:130] > kubeadm
	I1212 00:33:24.689760  124345 command_runner.go:130] > kubectl
	I1212 00:33:24.689766  124345 command_runner.go:130] > kubelet
	I1212 00:33:24.689832  124345 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:33:24.689887  124345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:33:24.699709  124345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1212 00:33:24.717308  124345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:33:24.734383  124345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1212 00:33:24.751534  124345 ssh_runner.go:195] Run: grep 192.168.39.208	control-plane.minikube.internal$ /etc/hosts
	I1212 00:33:24.755436  124345 command_runner.go:130] > 192.168.39.208	control-plane.minikube.internal
	I1212 00:33:24.755513  124345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:33:24.902042  124345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:33:24.918633  124345 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537 for IP: 192.168.39.208
	I1212 00:33:24.918663  124345 certs.go:194] generating shared ca certs ...
	I1212 00:33:24.918692  124345 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:33:24.918876  124345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:33:24.918939  124345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:33:24.918953  124345 certs.go:256] generating profile certs ...
	I1212 00:33:24.919093  124345 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/client.key
	I1212 00:33:24.919176  124345 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.key.ca4dfcaa
	I1212 00:33:24.919213  124345 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.key
	I1212 00:33:24.919225  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 00:33:24.919237  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 00:33:24.919248  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 00:33:24.919258  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 00:33:24.919270  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 00:33:24.919280  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 00:33:24.919292  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 00:33:24.919308  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 00:33:24.919365  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:33:24.919394  124345 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:33:24.919406  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:33:24.919468  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:33:24.919496  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:33:24.919522  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:33:24.919563  124345 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:33:24.919588  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem -> /usr/share/ca-certificates/93600.pem
	I1212 00:33:24.919630  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> /usr/share/ca-certificates/936002.pem
	I1212 00:33:24.919646  124345 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:24.920273  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:33:24.944679  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:33:24.968838  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:33:24.992775  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:33:25.017458  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 00:33:25.041689  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:33:25.065746  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:33:25.089699  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/multinode-492537/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:33:25.113283  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:33:25.137363  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:33:25.161217  124345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:33:25.187064  124345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:33:25.204830  124345 ssh_runner.go:195] Run: openssl version
	I1212 00:33:25.210851  124345 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1212 00:33:25.210937  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:33:25.222030  124345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:33:25.226519  124345 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:33:25.226581  124345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:33:25.226632  124345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:33:25.232467  124345 command_runner.go:130] > 51391683
	I1212 00:33:25.232528  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:33:25.242888  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:33:25.253813  124345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:33:25.258641  124345 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:33:25.258674  124345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:33:25.258719  124345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:33:25.264449  124345 command_runner.go:130] > 3ec20f2e
	I1212 00:33:25.264507  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:33:25.273973  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:33:25.285194  124345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.289653  124345 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.289745  124345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.289793  124345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:33:25.295197  124345 command_runner.go:130] > b5213941
	I1212 00:33:25.295454  124345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:33:25.304934  124345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:25.309763  124345 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:33:25.309789  124345 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 00:33:25.309797  124345 command_runner.go:130] > Device: 253,1	Inode: 5244462     Links: 1
	I1212 00:33:25.309803  124345 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 00:33:25.309809  124345 command_runner.go:130] > Access: 2024-12-12 00:26:24.410761101 +0000
	I1212 00:33:25.309816  124345 command_runner.go:130] > Modify: 2024-12-12 00:26:24.410761101 +0000
	I1212 00:33:25.309821  124345 command_runner.go:130] > Change: 2024-12-12 00:26:24.410761101 +0000
	I1212 00:33:25.309826  124345 command_runner.go:130] >  Birth: 2024-12-12 00:26:24.410761101 +0000
	I1212 00:33:25.309864  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:33:25.315607  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.315673  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:33:25.321224  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.321427  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:33:25.327267  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.327329  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:33:25.333074  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.333134  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:33:25.338993  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.339040  124345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 00:33:25.344547  124345 command_runner.go:130] > Certificate will not expire
	I1212 00:33:25.344709  124345 kubeadm.go:392] StartCluster: {Name:multinode-492537 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-492537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.186 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.21 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:33:25.344838  124345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:33:25.344892  124345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:33:25.384164  124345 command_runner.go:130] > 6c36c43dca7710da2daac93c4db2b9fe66d56935018b5cfb719223ca69bfeceb
	I1212 00:33:25.384188  124345 command_runner.go:130] > 15858f7c6c1998582e3c864d38164bdc97c02cc7c5821a71997397e8517d8996
	I1212 00:33:25.384195  124345 command_runner.go:130] > 76d1bbad8679a30947a98033746896e9d29f70b0eebab4f3ba9847677d057322
	I1212 00:33:25.384201  124345 command_runner.go:130] > a256f99dfeb012a82928d5b602a902e808285131c2171f286bcafe4fd2e24393
	I1212 00:33:25.384206  124345 command_runner.go:130] > 1bcb1a5c48edaeda78c1d27f17cce1b209165b9af22bc7735d1657078fb0f1cc
	I1212 00:33:25.384211  124345 command_runner.go:130] > 4e22f073589e4167bb82c8d86d415e9c1ed9d121f86471cbde61732a2b45d146
	I1212 00:33:25.384217  124345 command_runner.go:130] > 02c9588db3283f504267742f31da7c57cb5950e15720f4243bf286f0cd58e583
	I1212 00:33:25.384233  124345 command_runner.go:130] > dd846a91091143c5ca25f344cb9f2fa60b447f24daca84e2adb65c98007ca3c3
	I1212 00:33:25.386343  124345 cri.go:89] found id: "6c36c43dca7710da2daac93c4db2b9fe66d56935018b5cfb719223ca69bfeceb"
	I1212 00:33:25.386363  124345 cri.go:89] found id: "15858f7c6c1998582e3c864d38164bdc97c02cc7c5821a71997397e8517d8996"
	I1212 00:33:25.386366  124345 cri.go:89] found id: "76d1bbad8679a30947a98033746896e9d29f70b0eebab4f3ba9847677d057322"
	I1212 00:33:25.386369  124345 cri.go:89] found id: "a256f99dfeb012a82928d5b602a902e808285131c2171f286bcafe4fd2e24393"
	I1212 00:33:25.386372  124345 cri.go:89] found id: "1bcb1a5c48edaeda78c1d27f17cce1b209165b9af22bc7735d1657078fb0f1cc"
	I1212 00:33:25.386375  124345 cri.go:89] found id: "4e22f073589e4167bb82c8d86d415e9c1ed9d121f86471cbde61732a2b45d146"
	I1212 00:33:25.386379  124345 cri.go:89] found id: "02c9588db3283f504267742f31da7c57cb5950e15720f4243bf286f0cd58e583"
	I1212 00:33:25.386382  124345 cri.go:89] found id: "dd846a91091143c5ca25f344cb9f2fa60b447f24daca84e2adb65c98007ca3c3"
	I1212 00:33:25.386384  124345 cri.go:89] found id: ""
	I1212 00:33:25.386431  124345 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-492537 -n multinode-492537
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-492537 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.15s)

                                                
                                    
x
+
TestPreload (212.45s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-134802 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1212 00:42:46.618601   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:42:55.700774   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-134802 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m0.228839353s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-134802 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-134802 image pull gcr.io/k8s-minikube/busybox: (5.728371894s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-134802
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-134802: (7.284021645s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-134802 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-134802 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.118594678s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-134802 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-12-12 00:45:24.745931649 +0000 UTC m=+4322.837599260
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-134802 -n test-preload-134802
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-134802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-134802 logs -n 25: (1.073837008s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537 sudo cat                                       | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m03_multinode-492537.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt                       | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m02:/home/docker/cp-test_multinode-492537-m03_multinode-492537-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n                                                                 | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | multinode-492537-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-492537 ssh -n multinode-492537-m02 sudo cat                                   | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | /home/docker/cp-test_multinode-492537-m03_multinode-492537-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-492537 node stop m03                                                          | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	| node    | multinode-492537 node start                                                             | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC | 12 Dec 24 00:29 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-492537                                                                | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC |                     |
	| stop    | -p multinode-492537                                                                     | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:29 UTC |                     |
	| start   | -p multinode-492537                                                                     | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:31 UTC | 12 Dec 24 00:35 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-492537                                                                | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:35 UTC |                     |
	| node    | multinode-492537 node delete                                                            | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:35 UTC | 12 Dec 24 00:35 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-492537 stop                                                                   | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:35 UTC |                     |
	| start   | -p multinode-492537                                                                     | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:37 UTC | 12 Dec 24 00:41 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-492537                                                                | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:41 UTC |                     |
	| start   | -p multinode-492537-m02                                                                 | multinode-492537-m02 | jenkins | v1.34.0 | 12 Dec 24 00:41 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-492537-m03                                                                 | multinode-492537-m03 | jenkins | v1.34.0 | 12 Dec 24 00:41 UTC | 12 Dec 24 00:41 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-492537                                                                 | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:41 UTC |                     |
	| delete  | -p multinode-492537-m03                                                                 | multinode-492537-m03 | jenkins | v1.34.0 | 12 Dec 24 00:41 UTC | 12 Dec 24 00:41 UTC |
	| delete  | -p multinode-492537                                                                     | multinode-492537     | jenkins | v1.34.0 | 12 Dec 24 00:41 UTC | 12 Dec 24 00:41 UTC |
	| start   | -p test-preload-134802                                                                  | test-preload-134802  | jenkins | v1.34.0 | 12 Dec 24 00:41 UTC | 12 Dec 24 00:43 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-134802 image pull                                                          | test-preload-134802  | jenkins | v1.34.0 | 12 Dec 24 00:43 UTC | 12 Dec 24 00:44 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-134802                                                                  | test-preload-134802  | jenkins | v1.34.0 | 12 Dec 24 00:44 UTC | 12 Dec 24 00:44 UTC |
	| start   | -p test-preload-134802                                                                  | test-preload-134802  | jenkins | v1.34.0 | 12 Dec 24 00:44 UTC | 12 Dec 24 00:45 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-134802 image list                                                          | test-preload-134802  | jenkins | v1.34.0 | 12 Dec 24 00:45 UTC | 12 Dec 24 00:45 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:44:08
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:44:08.452922  128914 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:44:08.453041  128914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:44:08.453053  128914 out.go:358] Setting ErrFile to fd 2...
	I1212 00:44:08.453058  128914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:44:08.453216  128914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:44:08.453748  128914 out.go:352] Setting JSON to false
	I1212 00:44:08.454787  128914 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12390,"bootTime":1733951858,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:44:08.454882  128914 start.go:139] virtualization: kvm guest
	I1212 00:44:08.456998  128914 out.go:177] * [test-preload-134802] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:44:08.458305  128914 notify.go:220] Checking for updates...
	I1212 00:44:08.458313  128914 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:44:08.459711  128914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:44:08.461168  128914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:44:08.462454  128914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:44:08.463746  128914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:44:08.465046  128914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:44:08.466818  128914 config.go:182] Loaded profile config "test-preload-134802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 00:44:08.467416  128914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:44:08.467486  128914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:44:08.482171  128914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45685
	I1212 00:44:08.482798  128914 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:44:08.483307  128914 main.go:141] libmachine: Using API Version  1
	I1212 00:44:08.483329  128914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:44:08.483657  128914 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:44:08.483863  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:08.485728  128914 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1212 00:44:08.487048  128914 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:44:08.487342  128914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:44:08.487385  128914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:44:08.501656  128914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
	I1212 00:44:08.502101  128914 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:44:08.502580  128914 main.go:141] libmachine: Using API Version  1
	I1212 00:44:08.502610  128914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:44:08.502918  128914 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:44:08.503098  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:08.536704  128914 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:44:08.537979  128914 start.go:297] selected driver: kvm2
	I1212 00:44:08.537995  128914 start.go:901] validating driver "kvm2" against &{Name:test-preload-134802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-134802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:44:08.538123  128914 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:44:08.538809  128914 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:44:08.538908  128914 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:44:08.553617  128914 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:44:08.553974  128914 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:44:08.554006  128914 cni.go:84] Creating CNI manager for ""
	I1212 00:44:08.554061  128914 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:44:08.554134  128914 start.go:340] cluster config:
	{Name:test-preload-134802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-134802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:44:08.554267  128914 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:44:08.556087  128914 out.go:177] * Starting "test-preload-134802" primary control-plane node in "test-preload-134802" cluster
	I1212 00:44:08.557393  128914 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 00:44:09.274484  128914 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1212 00:44:09.274538  128914 cache.go:56] Caching tarball of preloaded images
	I1212 00:44:09.274704  128914 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 00:44:09.276719  128914 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1212 00:44:09.278137  128914 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 00:44:09.437170  128914 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1212 00:44:27.149870  128914 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 00:44:27.149968  128914 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1212 00:44:28.025481  128914 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1212 00:44:28.025625  128914 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/config.json ...
	I1212 00:44:28.025868  128914 start.go:360] acquireMachinesLock for test-preload-134802: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:44:28.025937  128914 start.go:364] duration metric: took 46.363µs to acquireMachinesLock for "test-preload-134802"
	I1212 00:44:28.025953  128914 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:44:28.025959  128914 fix.go:54] fixHost starting: 
	I1212 00:44:28.026243  128914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:44:28.026280  128914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:44:28.040893  128914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36783
	I1212 00:44:28.041461  128914 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:44:28.041934  128914 main.go:141] libmachine: Using API Version  1
	I1212 00:44:28.041956  128914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:44:28.042321  128914 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:44:28.042502  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:28.042655  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetState
	I1212 00:44:28.044339  128914 fix.go:112] recreateIfNeeded on test-preload-134802: state=Stopped err=<nil>
	I1212 00:44:28.044373  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	W1212 00:44:28.044539  128914 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:44:28.046849  128914 out.go:177] * Restarting existing kvm2 VM for "test-preload-134802" ...
	I1212 00:44:28.048391  128914 main.go:141] libmachine: (test-preload-134802) Calling .Start
	I1212 00:44:28.048551  128914 main.go:141] libmachine: (test-preload-134802) Ensuring networks are active...
	I1212 00:44:28.049175  128914 main.go:141] libmachine: (test-preload-134802) Ensuring network default is active
	I1212 00:44:28.049495  128914 main.go:141] libmachine: (test-preload-134802) Ensuring network mk-test-preload-134802 is active
	I1212 00:44:28.049947  128914 main.go:141] libmachine: (test-preload-134802) Getting domain xml...
	I1212 00:44:28.050678  128914 main.go:141] libmachine: (test-preload-134802) Creating domain...
	I1212 00:44:29.224581  128914 main.go:141] libmachine: (test-preload-134802) Waiting to get IP...
	I1212 00:44:29.225431  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:29.225784  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:29.225865  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:29.225771  129014 retry.go:31] will retry after 199.794811ms: waiting for machine to come up
	I1212 00:44:29.427488  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:29.427879  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:29.427911  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:29.427836  129014 retry.go:31] will retry after 314.42837ms: waiting for machine to come up
	I1212 00:44:29.744560  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:29.744987  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:29.745013  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:29.744940  129014 retry.go:31] will retry after 453.99ms: waiting for machine to come up
	I1212 00:44:30.200639  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:30.201054  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:30.201092  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:30.201008  129014 retry.go:31] will retry after 562.577489ms: waiting for machine to come up
	I1212 00:44:30.764718  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:30.765123  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:30.765154  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:30.765070  129014 retry.go:31] will retry after 719.560974ms: waiting for machine to come up
	I1212 00:44:31.486026  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:31.486471  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:31.486495  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:31.486418  129014 retry.go:31] will retry after 594.740486ms: waiting for machine to come up
	I1212 00:44:32.082250  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:32.082633  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:32.082662  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:32.082588  129014 retry.go:31] will retry after 1.017881796s: waiting for machine to come up
	I1212 00:44:33.101527  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:33.101907  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:33.101937  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:33.101860  129014 retry.go:31] will retry after 1.428273513s: waiting for machine to come up
	I1212 00:44:34.532429  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:34.532735  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:34.532764  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:34.532680  129014 retry.go:31] will retry after 1.295796801s: waiting for machine to come up
	I1212 00:44:35.830212  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:35.830644  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:35.830671  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:35.830579  129014 retry.go:31] will retry after 1.838971905s: waiting for machine to come up
	I1212 00:44:37.671760  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:37.672158  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:37.672182  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:37.672125  129014 retry.go:31] will retry after 2.896542733s: waiting for machine to come up
	I1212 00:44:40.571687  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:40.572068  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:40.572105  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:40.572023  129014 retry.go:31] will retry after 2.49022646s: waiting for machine to come up
	I1212 00:44:43.063927  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:43.064388  128914 main.go:141] libmachine: (test-preload-134802) DBG | unable to find current IP address of domain test-preload-134802 in network mk-test-preload-134802
	I1212 00:44:43.064412  128914 main.go:141] libmachine: (test-preload-134802) DBG | I1212 00:44:43.064342  129014 retry.go:31] will retry after 4.500157722s: waiting for machine to come up
	I1212 00:44:47.569990  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.570366  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has current primary IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.570396  128914 main.go:141] libmachine: (test-preload-134802) Found IP for machine: 192.168.39.6
	I1212 00:44:47.570410  128914 main.go:141] libmachine: (test-preload-134802) Reserving static IP address...
	I1212 00:44:47.570755  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "test-preload-134802", mac: "52:54:00:91:52:07", ip: "192.168.39.6"} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:47.570778  128914 main.go:141] libmachine: (test-preload-134802) Reserved static IP address: 192.168.39.6
	I1212 00:44:47.570796  128914 main.go:141] libmachine: (test-preload-134802) DBG | skip adding static IP to network mk-test-preload-134802 - found existing host DHCP lease matching {name: "test-preload-134802", mac: "52:54:00:91:52:07", ip: "192.168.39.6"}
	I1212 00:44:47.570810  128914 main.go:141] libmachine: (test-preload-134802) DBG | Getting to WaitForSSH function...
	I1212 00:44:47.570825  128914 main.go:141] libmachine: (test-preload-134802) Waiting for SSH to be available...
	I1212 00:44:47.572746  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.573014  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:47.573045  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.573103  128914 main.go:141] libmachine: (test-preload-134802) DBG | Using SSH client type: external
	I1212 00:44:47.573126  128914 main.go:141] libmachine: (test-preload-134802) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/test-preload-134802/id_rsa (-rw-------)
	I1212 00:44:47.573170  128914 main.go:141] libmachine: (test-preload-134802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.6 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/test-preload-134802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:44:47.573184  128914 main.go:141] libmachine: (test-preload-134802) DBG | About to run SSH command:
	I1212 00:44:47.573200  128914 main.go:141] libmachine: (test-preload-134802) DBG | exit 0
	I1212 00:44:47.699462  128914 main.go:141] libmachine: (test-preload-134802) DBG | SSH cmd err, output: <nil>: 
	I1212 00:44:47.699845  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetConfigRaw
	I1212 00:44:47.700513  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetIP
	I1212 00:44:47.702907  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.703238  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:47.703268  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.703547  128914 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/config.json ...
	I1212 00:44:47.703783  128914 machine.go:93] provisionDockerMachine start ...
	I1212 00:44:47.703812  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:47.704019  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:47.706136  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.706470  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:47.706507  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.706614  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:47.706802  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:47.706934  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:47.707075  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:47.707238  128914 main.go:141] libmachine: Using SSH client type: native
	I1212 00:44:47.707430  128914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1212 00:44:47.707441  128914 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 00:44:47.819902  128914 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 00:44:47.819934  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetMachineName
	I1212 00:44:47.820229  128914 buildroot.go:166] provisioning hostname "test-preload-134802"
	I1212 00:44:47.820265  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetMachineName
	I1212 00:44:47.820440  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:47.822969  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.823287  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:47.823315  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.823431  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:47.823625  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:47.823771  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:47.823892  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:47.824020  128914 main.go:141] libmachine: Using SSH client type: native
	I1212 00:44:47.824197  128914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1212 00:44:47.824216  128914 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-134802 && echo "test-preload-134802" | sudo tee /etc/hostname
	I1212 00:44:47.954195  128914 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-134802
	
	I1212 00:44:47.954232  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:47.957056  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.957402  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:47.957431  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:47.957615  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:47.957793  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:47.957927  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:47.958080  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:47.958209  128914 main.go:141] libmachine: Using SSH client type: native
	I1212 00:44:47.958423  128914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1212 00:44:47.958446  128914 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-134802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-134802/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-134802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:44:48.080397  128914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:44:48.080439  128914 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:44:48.080467  128914 buildroot.go:174] setting up certificates
	I1212 00:44:48.080482  128914 provision.go:84] configureAuth start
	I1212 00:44:48.080498  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetMachineName
	I1212 00:44:48.080775  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetIP
	I1212 00:44:48.083214  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.083559  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.083606  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.083779  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:48.085775  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.086082  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.086104  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.086250  128914 provision.go:143] copyHostCerts
	I1212 00:44:48.086324  128914 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:44:48.086348  128914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:44:48.086432  128914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:44:48.086547  128914 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:44:48.086559  128914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:44:48.086599  128914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:44:48.086683  128914 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:44:48.086693  128914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:44:48.086728  128914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:44:48.086799  128914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.test-preload-134802 san=[127.0.0.1 192.168.39.6 localhost minikube test-preload-134802]
	I1212 00:44:48.245669  128914 provision.go:177] copyRemoteCerts
	I1212 00:44:48.245757  128914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:44:48.245794  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:48.248322  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.248620  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.248647  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.248815  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:48.248990  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:48.249126  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:48.249237  128914 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/test-preload-134802/id_rsa Username:docker}
	I1212 00:44:48.333907  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:44:48.360514  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1212 00:44:48.386659  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:44:48.412288  128914 provision.go:87] duration metric: took 331.790392ms to configureAuth
	I1212 00:44:48.412316  128914 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:44:48.412483  128914 config.go:182] Loaded profile config "test-preload-134802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 00:44:48.412559  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:48.415152  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.415481  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.415510  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.415703  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:48.415900  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:48.416058  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:48.416150  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:48.416269  128914 main.go:141] libmachine: Using SSH client type: native
	I1212 00:44:48.416438  128914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1212 00:44:48.416452  128914 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:44:48.644678  128914 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:44:48.644709  128914 machine.go:96] duration metric: took 940.906923ms to provisionDockerMachine
	I1212 00:44:48.644722  128914 start.go:293] postStartSetup for "test-preload-134802" (driver="kvm2")
	I1212 00:44:48.644736  128914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:44:48.644751  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:48.645065  128914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:44:48.645105  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:48.647807  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.648124  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.648154  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.648302  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:48.648490  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:48.648641  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:48.648761  128914 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/test-preload-134802/id_rsa Username:docker}
	I1212 00:44:48.734691  128914 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:44:48.738883  128914 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:44:48.738909  128914 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:44:48.738973  128914 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:44:48.739044  128914 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:44:48.739141  128914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:44:48.748737  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:44:48.772141  128914 start.go:296] duration metric: took 127.401968ms for postStartSetup
	I1212 00:44:48.772181  128914 fix.go:56] duration metric: took 20.746222271s for fixHost
	I1212 00:44:48.772203  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:48.774582  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.774880  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.774908  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.775014  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:48.775211  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:48.775368  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:48.775459  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:48.775585  128914 main.go:141] libmachine: Using SSH client type: native
	I1212 00:44:48.775812  128914 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.6 22 <nil> <nil>}
	I1212 00:44:48.775827  128914 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:44:48.888375  128914 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733964288.863318890
	
	I1212 00:44:48.888401  128914 fix.go:216] guest clock: 1733964288.863318890
	I1212 00:44:48.888411  128914 fix.go:229] Guest: 2024-12-12 00:44:48.86331889 +0000 UTC Remote: 2024-12-12 00:44:48.77218543 +0000 UTC m=+40.357449656 (delta=91.13346ms)
	I1212 00:44:48.888436  128914 fix.go:200] guest clock delta is within tolerance: 91.13346ms
	I1212 00:44:48.888442  128914 start.go:83] releasing machines lock for "test-preload-134802", held for 20.862493725s
	I1212 00:44:48.888481  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:48.888724  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetIP
	I1212 00:44:48.891312  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.891647  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.891688  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.891820  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:48.892273  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:48.892472  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:44:48.892564  128914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:44:48.892618  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:48.892681  128914 ssh_runner.go:195] Run: cat /version.json
	I1212 00:44:48.892708  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:44:48.895404  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.895426  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.895754  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.895806  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.895835  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:48.895850  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:48.895890  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:48.896051  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:48.896054  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:44:48.896198  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:48.896239  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:44:48.896354  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:44:48.896350  128914 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/test-preload-134802/id_rsa Username:docker}
	I1212 00:44:48.896473  128914 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/test-preload-134802/id_rsa Username:docker}
	I1212 00:44:48.977000  128914 ssh_runner.go:195] Run: systemctl --version
	I1212 00:44:49.006385  128914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:44:49.155713  128914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:44:49.161587  128914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:44:49.161650  128914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:44:49.178107  128914 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:44:49.178130  128914 start.go:495] detecting cgroup driver to use...
	I1212 00:44:49.178188  128914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:44:49.194645  128914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:44:49.208735  128914 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:44:49.208798  128914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:44:49.222591  128914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:44:49.236552  128914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:44:49.360871  128914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:44:49.504626  128914 docker.go:233] disabling docker service ...
	I1212 00:44:49.504690  128914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:44:49.519495  128914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:44:49.532497  128914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:44:49.679715  128914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:44:49.799393  128914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:44:49.813549  128914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:44:49.832201  128914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1212 00:44:49.832275  128914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:44:49.842346  128914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:44:49.842414  128914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:44:49.852523  128914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:44:49.862543  128914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:44:49.872506  128914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:44:49.882821  128914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:44:49.893066  128914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:44:49.910030  128914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:44:49.920168  128914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:44:49.929264  128914 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:44:49.929313  128914 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:44:49.942781  128914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:44:49.951904  128914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:44:50.067522  128914 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:44:50.152410  128914 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:44:50.152487  128914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:44:50.157148  128914 start.go:563] Will wait 60s for crictl version
	I1212 00:44:50.157195  128914 ssh_runner.go:195] Run: which crictl
	I1212 00:44:50.160806  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:44:50.199380  128914 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:44:50.199467  128914 ssh_runner.go:195] Run: crio --version
	I1212 00:44:50.227558  128914 ssh_runner.go:195] Run: crio --version
	I1212 00:44:50.258819  128914 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1212 00:44:50.260129  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetIP
	I1212 00:44:50.262534  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:50.262862  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:44:50.262889  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:44:50.263050  128914 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 00:44:50.267163  128914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:44:50.279263  128914 kubeadm.go:883] updating cluster {Name:test-preload-134802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-134802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:44:50.279373  128914 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1212 00:44:50.279416  128914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:44:50.316672  128914 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1212 00:44:50.316743  128914 ssh_runner.go:195] Run: which lz4
	I1212 00:44:50.320724  128914 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 00:44:50.324910  128914 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 00:44:50.324942  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1212 00:44:51.912241  128914 crio.go:462] duration metric: took 1.591550202s to copy over tarball
	I1212 00:44:51.912347  128914 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 00:44:54.245043  128914 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.332661755s)
	I1212 00:44:54.245070  128914 crio.go:469] duration metric: took 2.332794185s to extract the tarball
	I1212 00:44:54.245080  128914 ssh_runner.go:146] rm: /preloaded.tar.lz4
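	The preload step above checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, extracts it into /var with lz4, and then removes it. A minimal sketch of the extraction call as a hypothetical Go helper (not minikube's API; assumes tar and lz4 are installed and uses the tarball path shown in the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into destDir,
	// mirroring the "tar --xattrs ... -I lz4 -C /var -xf" call in the log above.
	// Hypothetical helper for illustration only; it is not minikube code.
	func extractPreload(tarball, destDir string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", destDir, "-xf", tarball)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
		}
		return nil
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}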
	I1212 00:44:54.289033  128914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:44:54.339453  128914 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1212 00:44:54.339481  128914 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 00:44:54.339570  128914 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:44:54.339611  128914 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 00:44:54.339619  128914 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 00:44:54.339575  128914 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 00:44:54.339653  128914 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1212 00:44:54.339685  128914 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 00:44:54.339575  128914 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 00:44:54.339943  128914 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1212 00:44:54.341294  128914 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 00:44:54.341301  128914 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 00:44:54.341443  128914 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 00:44:54.341451  128914 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:44:54.341455  128914 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1212 00:44:54.341460  128914 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 00:44:54.341455  128914 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1212 00:44:54.341455  128914 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 00:44:54.558514  128914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1212 00:44:54.577634  128914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1212 00:44:54.611674  128914 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1212 00:44:54.611724  128914 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1212 00:44:54.611770  128914 ssh_runner.go:195] Run: which crictl
	I1212 00:44:54.630962  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1212 00:44:54.631081  128914 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1212 00:44:54.631119  128914 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1212 00:44:54.631156  128914 ssh_runner.go:195] Run: which crictl
	I1212 00:44:54.661664  128914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1212 00:44:54.661925  128914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1212 00:44:54.662008  128914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 00:44:54.662248  128914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1212 00:44:54.665407  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1212 00:44:54.665479  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1212 00:44:54.672414  128914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1212 00:44:54.809109  128914 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1212 00:44:54.809168  128914 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1212 00:44:54.809230  128914 ssh_runner.go:195] Run: which crictl
	I1212 00:44:54.835416  128914 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1212 00:44:54.835465  128914 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1212 00:44:54.835504  128914 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1212 00:44:54.835547  128914 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 00:44:54.835583  128914 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1212 00:44:54.835634  128914 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1212 00:44:54.835675  128914 ssh_runner.go:195] Run: which crictl
	I1212 00:44:54.835591  128914 ssh_runner.go:195] Run: which crictl
	I1212 00:44:54.835675  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1212 00:44:54.835513  128914 ssh_runner.go:195] Run: which crictl
	I1212 00:44:54.835735  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1212 00:44:54.838311  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1212 00:44:54.838424  128914 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1212 00:44:54.838465  128914 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1212 00:44:54.838494  128914 ssh_runner.go:195] Run: which crictl
	I1212 00:44:54.849830  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1212 00:44:54.849867  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 00:44:54.922634  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1212 00:44:54.922686  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1212 00:44:54.925536  128914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1212 00:44:54.925627  128914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1212 00:44:54.953921  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1212 00:44:54.953980  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1212 00:44:54.969308  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 00:44:54.969399  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1212 00:44:55.021659  128914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1212 00:44:55.021787  128914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 00:44:55.031529  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1212 00:44:55.031638  128914 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1212 00:44:55.031663  128914 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1212 00:44:55.031727  128914 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1212 00:44:55.105255  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1212 00:44:55.105311  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1212 00:44:55.115323  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1212 00:44:55.115375  128914 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1212 00:44:55.115422  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1212 00:44:55.137329  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1212 00:44:56.602681  128914 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:44:57.985251  128914 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.953493301s)
	I1212 00:44:57.985291  128914 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1212 00:44:57.985339  128914 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 00:44:57.985372  128914 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.880027972s)
	I1212 00:44:57.985393  128914 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1212 00:44:57.985451  128914 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1212 00:44:57.985477  128914 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.880186968s)
	I1212 00:44:57.985522  128914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1212 00:44:57.985548  128914 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.870095354s)
	I1212 00:44:57.985583  128914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1212 00:44:57.985609  128914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 00:44:57.985608  128914 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (2.870255818s)
	I1212 00:44:57.985656  128914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 00:44:57.985678  128914 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (2.848316978s)
	I1212 00:44:57.985687  128914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1212 00:44:57.985702  128914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1212 00:44:57.985769  128914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1212 00:44:57.985704  128914 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.382993993s)
	I1212 00:44:57.985769  128914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1212 00:44:58.857977  128914 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1212 00:44:58.858123  128914 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1212 00:44:58.858165  128914 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1212 00:44:58.858184  128914 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 00:44:58.858227  128914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 00:44:58.858238  128914 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1212 00:44:58.858276  128914 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1212 00:44:58.858346  128914 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1212 00:44:58.858404  128914 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1212 00:44:59.503138  128914 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1212 00:44:59.503194  128914 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1212 00:44:59.503240  128914 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 00:44:59.503330  128914 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1212 00:44:59.945529  128914 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1212 00:44:59.945582  128914 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1212 00:44:59.945644  128914 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1212 00:45:00.093119  128914 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1212 00:45:00.093181  128914 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1212 00:45:00.093233  128914 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1212 00:45:00.439006  128914 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1212 00:45:00.439053  128914 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 00:45:00.439119  128914 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1212 00:45:01.183441  128914 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1212 00:45:01.183489  128914 cache_images.go:123] Successfully loaded all cached images
	I1212 00:45:01.183495  128914 cache_images.go:92] duration metric: took 6.844001611s to LoadCachedImages
	I1212 00:45:01.183510  128914 kubeadm.go:934] updating node { 192.168.39.6 8443 v1.24.4 crio true true} ...
	I1212 00:45:01.183647  128914 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-134802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-134802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:45:01.183713  128914 ssh_runner.go:195] Run: crio config
	I1212 00:45:01.241145  128914 cni.go:84] Creating CNI manager for ""
	I1212 00:45:01.241167  128914 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:45:01.241180  128914 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:45:01.241205  128914 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.6 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-134802 NodeName:test-preload-134802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:45:01.241347  128914 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-134802"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
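	The YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) is generated from the kubeadm options struct logged earlier and written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of rendering a fragment of such a config from a few parameters with Go's text/template; the template and field names are illustrative, not minikube's actual ones, and the values come from the log above:

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative subset of the kubeadm ClusterConfiguration shown above.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: mk
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	type kubeadmParams struct {
		ControlPlaneEndpoint string
		KubernetesVersion    string
		PodSubnet            string
		ServiceSubnet        string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		p := kubeadmParams{
			ControlPlaneEndpoint: "control-plane.minikube.internal",
			KubernetesVersion:    "v1.24.4",
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}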
	I1212 00:45:01.241415  128914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1212 00:45:01.253991  128914 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:45:01.254069  128914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:45:01.264232  128914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I1212 00:45:01.281013  128914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:45:01.297495  128914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1212 00:45:01.314480  128914 ssh_runner.go:195] Run: grep 192.168.39.6	control-plane.minikube.internal$ /etc/hosts
	I1212 00:45:01.318417  128914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:45:01.331240  128914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:45:01.449006  128914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:45:01.467575  128914 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802 for IP: 192.168.39.6
	I1212 00:45:01.467619  128914 certs.go:194] generating shared ca certs ...
	I1212 00:45:01.467641  128914 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:45:01.467819  128914 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:45:01.467863  128914 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:45:01.467874  128914 certs.go:256] generating profile certs ...
	I1212 00:45:01.467962  128914 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/client.key
	I1212 00:45:01.468017  128914 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/apiserver.key.84a44dd4
	I1212 00:45:01.468058  128914 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/proxy-client.key
	I1212 00:45:01.468173  128914 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:45:01.468252  128914 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:45:01.468265  128914 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:45:01.468301  128914 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:45:01.468323  128914 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:45:01.468347  128914 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:45:01.468387  128914 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:45:01.469096  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:45:01.509296  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:45:01.534871  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:45:01.569993  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:45:01.595674  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 00:45:01.621582  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:45:01.668941  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:45:01.706281  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 00:45:01.730141  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:45:01.753197  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:45:01.777068  128914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:45:01.800747  128914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:45:01.817892  128914 ssh_runner.go:195] Run: openssl version
	I1212 00:45:01.824071  128914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:45:01.834828  128914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:45:01.839418  128914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:45:01.839461  128914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:45:01.845373  128914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:45:01.855876  128914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:45:01.866335  128914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:45:01.870784  128914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:45:01.870831  128914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:45:01.876402  128914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:45:01.887238  128914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:45:01.898412  128914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:45:01.902886  128914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:45:01.902951  128914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:45:01.908628  128914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:45:01.919207  128914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:45:01.924987  128914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:45:01.932432  128914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:45:01.939091  128914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:45:01.945314  128914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:45:01.951302  128914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:45:01.957304  128914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
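	The openssl -checkend 86400 calls above confirm that each control-plane certificate remains valid for at least another 24 hours before the existing certs are reused. An equivalent check in Go using only the standard library (hypothetical helper; the certificate path is the first one probed in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring "openssl x509 -noout -in <path> -checkend <seconds>".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}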
	I1212 00:45:01.963226  128914 kubeadm.go:392] StartCluster: {Name:test-preload-134802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-134802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:45:01.963318  128914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:45:01.963358  128914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:45:02.001927  128914 cri.go:89] found id: ""
	I1212 00:45:02.002003  128914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:45:02.012159  128914 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 00:45:02.012183  128914 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 00:45:02.012220  128914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 00:45:02.021900  128914 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:45:02.022338  128914 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-134802" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:45:02.022457  128914 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-86355/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-134802" cluster setting kubeconfig missing "test-preload-134802" context setting]
	I1212 00:45:02.022747  128914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:45:02.023342  128914 kapi.go:59] client config for test-preload-134802: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:45:02.024061  128914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 00:45:02.033492  128914 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.6
	I1212 00:45:02.033529  128914 kubeadm.go:1160] stopping kube-system containers ...
	I1212 00:45:02.033541  128914 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 00:45:02.033578  128914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:45:02.071622  128914 cri.go:89] found id: ""
	I1212 00:45:02.071683  128914 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 00:45:02.087530  128914 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:45:02.097052  128914 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:45:02.097080  128914 kubeadm.go:157] found existing configuration files:
	
	I1212 00:45:02.097147  128914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:45:02.105971  128914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:45:02.106019  128914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:45:02.115014  128914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:45:02.123852  128914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:45:02.123901  128914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:45:02.132918  128914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:45:02.141873  128914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:45:02.141917  128914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:45:02.150981  128914 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:45:02.159630  128914 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:45:02.159676  128914 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:45:02.168596  128914 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:45:02.177599  128914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:45:02.269684  128914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:45:02.920074  128914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:45:03.183339  128914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:45:03.256368  128914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:45:03.335029  128914 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:45:03.335142  128914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:45:03.836094  128914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:45:04.336031  128914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:45:04.364870  128914 api_server.go:72] duration metric: took 1.029841164s to wait for apiserver process to appear ...
	I1212 00:45:04.364908  128914 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:45:04.364936  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:04.365489  128914 api_server.go:269] stopped: https://192.168.39.6:8443/healthz: Get "https://192.168.39.6:8443/healthz": dial tcp 192.168.39.6:8443: connect: connection refused
	I1212 00:45:04.865037  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:04.865644  128914 api_server.go:269] stopped: https://192.168.39.6:8443/healthz: Get "https://192.168.39.6:8443/healthz": dial tcp 192.168.39.6:8443: connect: connection refused
	I1212 00:45:05.365171  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:08.318932  128914 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:45:08.318961  128914 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:45:08.318975  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:08.371907  128914 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:45:08.371953  128914 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:45:08.371974  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:08.401216  128914 api_server.go:279] https://192.168.39.6:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 00:45:08.401271  128914 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 00:45:08.865038  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:08.870779  128914 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:45:08.870830  128914 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:45:09.365827  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:09.374320  128914 api_server.go:279] https://192.168.39.6:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 00:45:09.374350  128914 api_server.go:103] status: https://192.168.39.6:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 00:45:09.865985  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:09.871391  128914 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I1212 00:45:09.877772  128914 api_server.go:141] control plane version: v1.24.4
	I1212 00:45:09.877797  128914 api_server.go:131] duration metric: took 5.512881906s to wait for apiserver health ...
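	The wait loop above polls https://192.168.39.6:8443/healthz until it returns 200, tolerating the transient 403 and 500 responses seen while RBAC bootstrap completes. A minimal sketch of such a poll loop in Go; unlike minikube, which authenticates with the profile's client certificates, this sketch simply skips TLS verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: a real check would trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.6:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}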
	I1212 00:45:09.877806  128914 cni.go:84] Creating CNI manager for ""
	I1212 00:45:09.877813  128914 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:45:09.879580  128914 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 00:45:09.881160  128914 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 00:45:09.893379  128914 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 00:45:09.911969  128914 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:45:09.912064  128914 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 00:45:09.912087  128914 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 00:45:09.920549  128914 system_pods.go:59] 7 kube-system pods found
	I1212 00:45:09.920584  128914 system_pods.go:61] "coredns-6d4b75cb6d-djdb5" [d42c3141-50e3-42cf-97f1-1882639f83d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 00:45:09.920591  128914 system_pods.go:61] "etcd-test-preload-134802" [98ea6866-a108-4c5d-a6af-ef43f1d6d1db] Running
	I1212 00:45:09.920596  128914 system_pods.go:61] "kube-apiserver-test-preload-134802" [9bcd91a2-ab3b-45c4-aa58-8def7e1c6a3e] Running
	I1212 00:45:09.920601  128914 system_pods.go:61] "kube-controller-manager-test-preload-134802" [56239c0f-cdcd-44d6-90c2-d96451f1ac77] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 00:45:09.920610  128914 system_pods.go:61] "kube-proxy-m5pw7" [d1551f28-3107-41ba-b271-29272d461671] Running
	I1212 00:45:09.920616  128914 system_pods.go:61] "kube-scheduler-test-preload-134802" [d4f63e55-49ec-4961-92a6-26243c91a1bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 00:45:09.920621  128914 system_pods.go:61] "storage-provisioner" [6642d1e0-ecff-4830-897e-bbeaba84df2b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 00:45:09.920629  128914 system_pods.go:74] duration metric: took 8.63854ms to wait for pod list to return data ...
	I1212 00:45:09.920636  128914 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:45:09.923928  128914 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:45:09.923959  128914 node_conditions.go:123] node cpu capacity is 2
	I1212 00:45:09.923972  128914 node_conditions.go:105] duration metric: took 3.328455ms to run NodePressure ...
	I1212 00:45:09.923993  128914 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 00:45:10.186301  128914 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 00:45:10.192361  128914 kubeadm.go:739] kubelet initialised
	I1212 00:45:10.192386  128914 kubeadm.go:740] duration metric: took 6.056474ms waiting for restarted kubelet to initialise ...
	I1212 00:45:10.192397  128914 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:45:10.196900  128914 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-djdb5" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:10.202305  128914 pod_ready.go:98] node "test-preload-134802" hosting pod "coredns-6d4b75cb6d-djdb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.202332  128914 pod_ready.go:82] duration metric: took 5.403211ms for pod "coredns-6d4b75cb6d-djdb5" in "kube-system" namespace to be "Ready" ...
	E1212 00:45:10.202341  128914 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-134802" hosting pod "coredns-6d4b75cb6d-djdb5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.202347  128914 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:10.209503  128914 pod_ready.go:98] node "test-preload-134802" hosting pod "etcd-test-preload-134802" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.209522  128914 pod_ready.go:82] duration metric: took 7.165911ms for pod "etcd-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	E1212 00:45:10.209530  128914 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-134802" hosting pod "etcd-test-preload-134802" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.209536  128914 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:10.213646  128914 pod_ready.go:98] node "test-preload-134802" hosting pod "kube-apiserver-test-preload-134802" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.213669  128914 pod_ready.go:82] duration metric: took 4.125057ms for pod "kube-apiserver-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	E1212 00:45:10.213677  128914 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-134802" hosting pod "kube-apiserver-test-preload-134802" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.213683  128914 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:10.320981  128914 pod_ready.go:98] node "test-preload-134802" hosting pod "kube-controller-manager-test-preload-134802" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.321034  128914 pod_ready.go:82] duration metric: took 107.338791ms for pod "kube-controller-manager-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	E1212 00:45:10.321053  128914 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-134802" hosting pod "kube-controller-manager-test-preload-134802" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.321067  128914 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-m5pw7" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:10.717138  128914 pod_ready.go:98] node "test-preload-134802" hosting pod "kube-proxy-m5pw7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.717171  128914 pod_ready.go:82] duration metric: took 396.084909ms for pod "kube-proxy-m5pw7" in "kube-system" namespace to be "Ready" ...
	E1212 00:45:10.717185  128914 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-134802" hosting pod "kube-proxy-m5pw7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:10.717193  128914 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:11.115943  128914 pod_ready.go:98] node "test-preload-134802" hosting pod "kube-scheduler-test-preload-134802" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:11.115971  128914 pod_ready.go:82] duration metric: took 398.770342ms for pod "kube-scheduler-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	E1212 00:45:11.115981  128914 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-134802" hosting pod "kube-scheduler-test-preload-134802" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:11.115988  128914 pod_ready.go:39] duration metric: took 923.580438ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:45:11.116007  128914 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 00:45:11.129170  128914 ops.go:34] apiserver oom_adj: -16
	I1212 00:45:11.129191  128914 kubeadm.go:597] duration metric: took 9.117001975s to restartPrimaryControlPlane
	I1212 00:45:11.129199  128914 kubeadm.go:394] duration metric: took 9.1659826s to StartCluster
	I1212 00:45:11.129217  128914 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:45:11.129296  128914 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:45:11.129971  128914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:45:11.130198  128914 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.6 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:45:11.130276  128914 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 00:45:11.130372  128914 addons.go:69] Setting storage-provisioner=true in profile "test-preload-134802"
	I1212 00:45:11.130391  128914 addons.go:69] Setting default-storageclass=true in profile "test-preload-134802"
	I1212 00:45:11.130426  128914 config.go:182] Loaded profile config "test-preload-134802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1212 00:45:11.130445  128914 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-134802"
	I1212 00:45:11.130396  128914 addons.go:234] Setting addon storage-provisioner=true in "test-preload-134802"
	W1212 00:45:11.130506  128914 addons.go:243] addon storage-provisioner should already be in state true
	I1212 00:45:11.130543  128914 host.go:66] Checking if "test-preload-134802" exists ...
	I1212 00:45:11.130760  128914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:45:11.130803  128914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:45:11.130867  128914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:45:11.130882  128914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:45:11.131793  128914 out.go:177] * Verifying Kubernetes components...
	I1212 00:45:11.133133  128914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:45:11.146283  128914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34489
	I1212 00:45:11.146390  128914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I1212 00:45:11.146817  128914 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:45:11.146884  128914 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:45:11.147416  128914 main.go:141] libmachine: Using API Version  1
	I1212 00:45:11.147436  128914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:45:11.147553  128914 main.go:141] libmachine: Using API Version  1
	I1212 00:45:11.147577  128914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:45:11.147767  128914 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:45:11.147992  128914 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:45:11.148192  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetState
	I1212 00:45:11.148261  128914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:45:11.148303  128914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:45:11.150525  128914 kapi.go:59] client config for test-preload-134802: &rest.Config{Host:"https://192.168.39.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/client.crt", KeyFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/test-preload-134802/client.key", CAFile:"/home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(ni
l), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243c2e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 00:45:11.150897  128914 addons.go:234] Setting addon default-storageclass=true in "test-preload-134802"
	W1212 00:45:11.150943  128914 addons.go:243] addon default-storageclass should already be in state true
	I1212 00:45:11.150983  128914 host.go:66] Checking if "test-preload-134802" exists ...
	I1212 00:45:11.151382  128914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:45:11.151427  128914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:45:11.164064  128914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33151
	I1212 00:45:11.164629  128914 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:45:11.165185  128914 main.go:141] libmachine: Using API Version  1
	I1212 00:45:11.165208  128914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:45:11.165577  128914 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:45:11.165759  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetState
	I1212 00:45:11.166167  128914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I1212 00:45:11.166632  128914 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:45:11.167024  128914 main.go:141] libmachine: Using API Version  1
	I1212 00:45:11.167051  128914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:45:11.167432  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:45:11.167462  128914 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:45:11.168029  128914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:45:11.168079  128914 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:45:11.169360  128914 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:45:11.170821  128914 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:45:11.170836  128914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 00:45:11.170853  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:45:11.173374  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:45:11.173743  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:45:11.173765  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:45:11.173927  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:45:11.174087  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:45:11.174240  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:45:11.174364  128914 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/test-preload-134802/id_rsa Username:docker}
	I1212 00:45:11.218932  128914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35123
	I1212 00:45:11.219434  128914 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:45:11.219919  128914 main.go:141] libmachine: Using API Version  1
	I1212 00:45:11.219941  128914 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:45:11.220288  128914 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:45:11.220482  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetState
	I1212 00:45:11.221917  128914 main.go:141] libmachine: (test-preload-134802) Calling .DriverName
	I1212 00:45:11.222128  128914 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 00:45:11.222148  128914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 00:45:11.222167  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHHostname
	I1212 00:45:11.224935  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:45:11.225384  128914 main.go:141] libmachine: (test-preload-134802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:52:07", ip: ""} in network mk-test-preload-134802: {Iface:virbr1 ExpiryTime:2024-12-12 01:44:39 +0000 UTC Type:0 Mac:52:54:00:91:52:07 Iaid: IPaddr:192.168.39.6 Prefix:24 Hostname:test-preload-134802 Clientid:01:52:54:00:91:52:07}
	I1212 00:45:11.225406  128914 main.go:141] libmachine: (test-preload-134802) DBG | domain test-preload-134802 has defined IP address 192.168.39.6 and MAC address 52:54:00:91:52:07 in network mk-test-preload-134802
	I1212 00:45:11.225566  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHPort
	I1212 00:45:11.225724  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHKeyPath
	I1212 00:45:11.225839  128914 main.go:141] libmachine: (test-preload-134802) Calling .GetSSHUsername
	I1212 00:45:11.226070  128914 sshutil.go:53] new ssh client: &{IP:192.168.39.6 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/test-preload-134802/id_rsa Username:docker}
	I1212 00:45:11.320732  128914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:45:11.341747  128914 node_ready.go:35] waiting up to 6m0s for node "test-preload-134802" to be "Ready" ...
	I1212 00:45:11.457018  128914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 00:45:11.470925  128914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 00:45:12.471566  128914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.014508399s)
	I1212 00:45:12.471646  128914 main.go:141] libmachine: Making call to close driver server
	I1212 00:45:12.471662  128914 main.go:141] libmachine: (test-preload-134802) Calling .Close
	I1212 00:45:12.471954  128914 main.go:141] libmachine: (test-preload-134802) DBG | Closing plugin on server side
	I1212 00:45:12.471958  128914 main.go:141] libmachine: Successfully made call to close driver server
	I1212 00:45:12.471983  128914 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 00:45:12.471999  128914 main.go:141] libmachine: Making call to close driver server
	I1212 00:45:12.472009  128914 main.go:141] libmachine: (test-preload-134802) Calling .Close
	I1212 00:45:12.472233  128914 main.go:141] libmachine: Successfully made call to close driver server
	I1212 00:45:12.472249  128914 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 00:45:12.496732  128914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.025764126s)
	I1212 00:45:12.496788  128914 main.go:141] libmachine: Making call to close driver server
	I1212 00:45:12.496801  128914 main.go:141] libmachine: (test-preload-134802) Calling .Close
	I1212 00:45:12.497134  128914 main.go:141] libmachine: Successfully made call to close driver server
	I1212 00:45:12.497156  128914 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 00:45:12.497162  128914 main.go:141] libmachine: (test-preload-134802) DBG | Closing plugin on server side
	I1212 00:45:12.497169  128914 main.go:141] libmachine: Making call to close driver server
	I1212 00:45:12.497210  128914 main.go:141] libmachine: (test-preload-134802) Calling .Close
	I1212 00:45:12.497430  128914 main.go:141] libmachine: Successfully made call to close driver server
	I1212 00:45:12.497445  128914 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 00:45:12.497456  128914 main.go:141] libmachine: (test-preload-134802) DBG | Closing plugin on server side
	I1212 00:45:12.510827  128914 main.go:141] libmachine: Making call to close driver server
	I1212 00:45:12.510850  128914 main.go:141] libmachine: (test-preload-134802) Calling .Close
	I1212 00:45:12.511099  128914 main.go:141] libmachine: Successfully made call to close driver server
	I1212 00:45:12.511119  128914 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 00:45:12.511142  128914 main.go:141] libmachine: (test-preload-134802) DBG | Closing plugin on server side
	I1212 00:45:12.513963  128914 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1212 00:45:12.515133  128914 addons.go:510] duration metric: took 1.384870513s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1212 00:45:13.345598  128914 node_ready.go:53] node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:15.845627  128914 node_ready.go:53] node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:17.846885  128914 node_ready.go:53] node "test-preload-134802" has status "Ready":"False"
	I1212 00:45:18.845602  128914 node_ready.go:49] node "test-preload-134802" has status "Ready":"True"
	I1212 00:45:18.845626  128914 node_ready.go:38] duration metric: took 7.503847682s for node "test-preload-134802" to be "Ready" ...
	I1212 00:45:18.845635  128914 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:45:18.851484  128914 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-djdb5" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:18.856521  128914 pod_ready.go:93] pod "coredns-6d4b75cb6d-djdb5" in "kube-system" namespace has status "Ready":"True"
	I1212 00:45:18.856541  128914 pod_ready.go:82] duration metric: took 5.031226ms for pod "coredns-6d4b75cb6d-djdb5" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:18.856549  128914 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:18.860981  128914 pod_ready.go:93] pod "etcd-test-preload-134802" in "kube-system" namespace has status "Ready":"True"
	I1212 00:45:18.861010  128914 pod_ready.go:82] duration metric: took 4.445598ms for pod "etcd-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:18.861019  128914 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:19.367955  128914 pod_ready.go:93] pod "kube-apiserver-test-preload-134802" in "kube-system" namespace has status "Ready":"True"
	I1212 00:45:19.367979  128914 pod_ready.go:82] duration metric: took 506.954211ms for pod "kube-apiserver-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:19.367989  128914 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:21.380237  128914 pod_ready.go:103] pod "kube-controller-manager-test-preload-134802" in "kube-system" namespace has status "Ready":"False"
	I1212 00:45:23.874983  128914 pod_ready.go:93] pod "kube-controller-manager-test-preload-134802" in "kube-system" namespace has status "Ready":"True"
	I1212 00:45:23.875011  128914 pod_ready.go:82] duration metric: took 4.507015966s for pod "kube-controller-manager-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:23.875021  128914 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m5pw7" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:23.879510  128914 pod_ready.go:93] pod "kube-proxy-m5pw7" in "kube-system" namespace has status "Ready":"True"
	I1212 00:45:23.879530  128914 pod_ready.go:82] duration metric: took 4.504246ms for pod "kube-proxy-m5pw7" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:23.879538  128914 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:23.883372  128914 pod_ready.go:93] pod "kube-scheduler-test-preload-134802" in "kube-system" namespace has status "Ready":"True"
	I1212 00:45:23.883396  128914 pod_ready.go:82] duration metric: took 3.846621ms for pod "kube-scheduler-test-preload-134802" in "kube-system" namespace to be "Ready" ...
	I1212 00:45:23.883405  128914 pod_ready.go:39] duration metric: took 5.03776103s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 00:45:23.883420  128914 api_server.go:52] waiting for apiserver process to appear ...
	I1212 00:45:23.883471  128914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:45:23.898111  128914 api_server.go:72] duration metric: took 12.767883581s to wait for apiserver process to appear ...
	I1212 00:45:23.898134  128914 api_server.go:88] waiting for apiserver healthz status ...
	I1212 00:45:23.898150  128914 api_server.go:253] Checking apiserver healthz at https://192.168.39.6:8443/healthz ...
	I1212 00:45:23.902815  128914 api_server.go:279] https://192.168.39.6:8443/healthz returned 200:
	ok
	I1212 00:45:23.903636  128914 api_server.go:141] control plane version: v1.24.4
	I1212 00:45:23.903653  128914 api_server.go:131] duration metric: took 5.514293ms to wait for apiserver health ...
	I1212 00:45:23.903662  128914 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 00:45:23.908129  128914 system_pods.go:59] 7 kube-system pods found
	I1212 00:45:23.908151  128914 system_pods.go:61] "coredns-6d4b75cb6d-djdb5" [d42c3141-50e3-42cf-97f1-1882639f83d8] Running
	I1212 00:45:23.908155  128914 system_pods.go:61] "etcd-test-preload-134802" [98ea6866-a108-4c5d-a6af-ef43f1d6d1db] Running
	I1212 00:45:23.908159  128914 system_pods.go:61] "kube-apiserver-test-preload-134802" [9bcd91a2-ab3b-45c4-aa58-8def7e1c6a3e] Running
	I1212 00:45:23.908163  128914 system_pods.go:61] "kube-controller-manager-test-preload-134802" [56239c0f-cdcd-44d6-90c2-d96451f1ac77] Running
	I1212 00:45:23.908166  128914 system_pods.go:61] "kube-proxy-m5pw7" [d1551f28-3107-41ba-b271-29272d461671] Running
	I1212 00:45:23.908169  128914 system_pods.go:61] "kube-scheduler-test-preload-134802" [d4f63e55-49ec-4961-92a6-26243c91a1bb] Running
	I1212 00:45:23.908171  128914 system_pods.go:61] "storage-provisioner" [6642d1e0-ecff-4830-897e-bbeaba84df2b] Running
	I1212 00:45:23.908177  128914 system_pods.go:74] duration metric: took 4.509555ms to wait for pod list to return data ...
	I1212 00:45:23.908186  128914 default_sa.go:34] waiting for default service account to be created ...
	I1212 00:45:24.045372  128914 default_sa.go:45] found service account: "default"
	I1212 00:45:24.045398  128914 default_sa.go:55] duration metric: took 137.205127ms for default service account to be created ...
	I1212 00:45:24.045407  128914 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 00:45:24.248424  128914 system_pods.go:86] 7 kube-system pods found
	I1212 00:45:24.248453  128914 system_pods.go:89] "coredns-6d4b75cb6d-djdb5" [d42c3141-50e3-42cf-97f1-1882639f83d8] Running
	I1212 00:45:24.248458  128914 system_pods.go:89] "etcd-test-preload-134802" [98ea6866-a108-4c5d-a6af-ef43f1d6d1db] Running
	I1212 00:45:24.248462  128914 system_pods.go:89] "kube-apiserver-test-preload-134802" [9bcd91a2-ab3b-45c4-aa58-8def7e1c6a3e] Running
	I1212 00:45:24.248465  128914 system_pods.go:89] "kube-controller-manager-test-preload-134802" [56239c0f-cdcd-44d6-90c2-d96451f1ac77] Running
	I1212 00:45:24.248469  128914 system_pods.go:89] "kube-proxy-m5pw7" [d1551f28-3107-41ba-b271-29272d461671] Running
	I1212 00:45:24.248472  128914 system_pods.go:89] "kube-scheduler-test-preload-134802" [d4f63e55-49ec-4961-92a6-26243c91a1bb] Running
	I1212 00:45:24.248475  128914 system_pods.go:89] "storage-provisioner" [6642d1e0-ecff-4830-897e-bbeaba84df2b] Running
	I1212 00:45:24.248482  128914 system_pods.go:126] duration metric: took 203.068602ms to wait for k8s-apps to be running ...
	I1212 00:45:24.248489  128914 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 00:45:24.248531  128914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:45:24.262902  128914 system_svc.go:56] duration metric: took 14.403064ms WaitForService to wait for kubelet
	I1212 00:45:24.262929  128914 kubeadm.go:582] duration metric: took 13.132704694s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:45:24.262955  128914 node_conditions.go:102] verifying NodePressure condition ...
	I1212 00:45:24.446889  128914 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 00:45:24.446915  128914 node_conditions.go:123] node cpu capacity is 2
	I1212 00:45:24.446925  128914 node_conditions.go:105] duration metric: took 183.965433ms to run NodePressure ...
	I1212 00:45:24.446937  128914 start.go:241] waiting for startup goroutines ...
	I1212 00:45:24.446944  128914 start.go:246] waiting for cluster config update ...
	I1212 00:45:24.446953  128914 start.go:255] writing updated cluster config ...
	I1212 00:45:24.447202  128914 ssh_runner.go:195] Run: rm -f paused
	I1212 00:45:24.493128  128914 start.go:600] kubectl: 1.32.0, cluster: 1.24.4 (minor skew: 8)
	I1212 00:45:24.495224  128914 out.go:201] 
	W1212 00:45:24.496821  128914 out.go:270] ! /usr/local/bin/kubectl is version 1.32.0, which may have incompatibilities with Kubernetes 1.24.4.
	I1212 00:45:24.498344  128914 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1212 00:45:24.499604  128914 out.go:177] * Done! kubectl is now configured to use "test-preload-134802" cluster and "default" namespace by default
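The wait loop logged at 00:45:23 above polls https://192.168.39.6:8443/healthz until it answers 200 before the control plane is declared healthy. Below is a minimal standalone sketch of that style of probe, written as illustrative Go rather than minikube's actual api_server.go code; it skips certificate verification only to keep the example short, whereas the real check trusts the cluster CA shown in the client config earlier in the log.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz polls an apiserver /healthz endpoint until it returns 200 or the
    // deadline passes, roughly mirroring the healthz wait recorded in the log above.
    func probeHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustrative shortcut: skip TLS verification. The real check uses the
    		// cluster CA (/home/jenkins/.../.minikube/ca.crt) instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, deadline)
    }

    func main() {
    	if err := probeHealthz("https://192.168.39.6:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("ok")
    }

Run against the test VM's endpoint this would print "ok" once /healthz returns 200, which is the point at which the log above reports "control plane version: v1.24.4" and moves on to the kube-system pod checks.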
	
	
	==> CRI-O <==
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.448724092Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07cfddee-d347-44e5-ba9b-12e41fe16f92 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.450066264Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86fc0082-c8dc-4172-ba9a-f298e959fd7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.450615610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733964325450585607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86fc0082-c8dc-4172-ba9a-f298e959fd7d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.451196094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebc650e0-3720-4eec-b5e1-c6199539675a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.451266928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebc650e0-3720-4eec-b5e1-c6199539675a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.451462861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d40ef1e9ae705b2f36579555948b0511c8dbddb262c26debdd3e285c14bc87,PodSandboxId:82af4f8d6aca4b4d9c0db3aefb218d55897d0fb41fedee8049acf4fe5cb14384,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733964317583385690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-djdb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42c3141-50e3-42cf-97f1-1882639f83d8,},Annotations:map[string]string{io.kubernetes.container.hash: 4ed9f1de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd6ff4627e13d5d02205eb57d8f235a496582df73c458e8d6c815a37cd6c515,PodSandboxId:1793a87a4b2b47685f8da9983c9077da34edee683bbf3d6d6fba46e817637b11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733964310381607700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6642d1e0-ecff-4830-897e-bbeaba84df2b,},Annotations:map[string]string{io.kubernetes.container.hash: 94424e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bfc654c074ab4e46fa63fbdd1e67cb3539130919be7632080b95b1aab9debd,PodSandboxId:de22e4484738ff1599e3cf17ee4abc7bb0a89b004d0e032e825372531ab511b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733964310134112586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5pw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1
551f28-3107-41ba-b271-29272d461671,},Annotations:map[string]string{io.kubernetes.container.hash: ee565a25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769ff943593df6c18c27c18e6c84df3218fe7692abc643c63773483170735d3,PodSandboxId:dfef157ceb194b2fac16a33d7b41c1848dd8c3d57ffb02ec710f6ea3a179190b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733964304161977079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b3d994ee80c7c52908d8f555e9f214,},Anno
tations:map[string]string{io.kubernetes.container.hash: af9757,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5f5a25dd83bf851abc56dbb88a821573f1d48456411e891c8099d035341ff2,PodSandboxId:14e33e5755b94a051c7152164004f68554aeaba49bc7c37731609b7f198fb48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733964304107465866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d221582497b883ffd1699b9
8dafa31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb8d47df5a621d4b3199bc984b19315c2a0097014c4ba811c54bdbfda365207,PodSandboxId:d102bdd28a051e011ed4e0166e2d7a9a7da9ef5ee404e7235b2435ddc81bd16b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733964304059982916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06f61a739063da64a7047b950fea0b0,},A
nnotations:map[string]string{io.kubernetes.container.hash: 33eda82f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2480975830f7f5feb7699bd51f9acb675615635ea040e89b36d0e53297d5fd16,PodSandboxId:11360547e801d47705ceebc0adb1f674a403e4b5a620d4d12fdd2dc28c717a43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733964304011030613,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c527b3d6240739e5536f8b88567dee4,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebc650e0-3720-4eec-b5e1-c6199539675a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.491256635Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7e3e34f-7e62-4004-82f1-b53f9a84d877 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.491349738Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7e3e34f-7e62-4004-82f1-b53f9a84d877 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.492731598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=162e42a4-70ab-4b56-83be-81abdbf52230 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.493287827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733964325493263577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=162e42a4-70ab-4b56-83be-81abdbf52230 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.494342548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7691c890-28f5-4ffb-b370-c4ec8a896ffe name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.494455089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7691c890-28f5-4ffb-b370-c4ec8a896ffe name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.494611778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d40ef1e9ae705b2f36579555948b0511c8dbddb262c26debdd3e285c14bc87,PodSandboxId:82af4f8d6aca4b4d9c0db3aefb218d55897d0fb41fedee8049acf4fe5cb14384,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733964317583385690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-djdb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42c3141-50e3-42cf-97f1-1882639f83d8,},Annotations:map[string]string{io.kubernetes.container.hash: 4ed9f1de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd6ff4627e13d5d02205eb57d8f235a496582df73c458e8d6c815a37cd6c515,PodSandboxId:1793a87a4b2b47685f8da9983c9077da34edee683bbf3d6d6fba46e817637b11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733964310381607700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6642d1e0-ecff-4830-897e-bbeaba84df2b,},Annotations:map[string]string{io.kubernetes.container.hash: 94424e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bfc654c074ab4e46fa63fbdd1e67cb3539130919be7632080b95b1aab9debd,PodSandboxId:de22e4484738ff1599e3cf17ee4abc7bb0a89b004d0e032e825372531ab511b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733964310134112586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5pw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1
551f28-3107-41ba-b271-29272d461671,},Annotations:map[string]string{io.kubernetes.container.hash: ee565a25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769ff943593df6c18c27c18e6c84df3218fe7692abc643c63773483170735d3,PodSandboxId:dfef157ceb194b2fac16a33d7b41c1848dd8c3d57ffb02ec710f6ea3a179190b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733964304161977079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b3d994ee80c7c52908d8f555e9f214,},Anno
tations:map[string]string{io.kubernetes.container.hash: af9757,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5f5a25dd83bf851abc56dbb88a821573f1d48456411e891c8099d035341ff2,PodSandboxId:14e33e5755b94a051c7152164004f68554aeaba49bc7c37731609b7f198fb48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733964304107465866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d221582497b883ffd1699b9
8dafa31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb8d47df5a621d4b3199bc984b19315c2a0097014c4ba811c54bdbfda365207,PodSandboxId:d102bdd28a051e011ed4e0166e2d7a9a7da9ef5ee404e7235b2435ddc81bd16b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733964304059982916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06f61a739063da64a7047b950fea0b0,},A
nnotations:map[string]string{io.kubernetes.container.hash: 33eda82f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2480975830f7f5feb7699bd51f9acb675615635ea040e89b36d0e53297d5fd16,PodSandboxId:11360547e801d47705ceebc0adb1f674a403e4b5a620d4d12fdd2dc28c717a43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733964304011030613,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c527b3d6240739e5536f8b88567dee4,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7691c890-28f5-4ffb-b370-c4ec8a896ffe name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.529254685Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b894b16d-88e6-4464-8a44-a200ae58cf83 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.529343635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b894b16d-88e6-4464-8a44-a200ae58cf83 name=/runtime.v1.RuntimeService/Version
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.530230711Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58c9d656-f7f1-4d62-b0ff-4276ad38c936 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.530770414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733964325530747828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58c9d656-f7f1-4d62-b0ff-4276ad38c936 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.531223191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a86ffc4-675a-47f4-a10c-93cdbadd3967 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.531289278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a86ffc4-675a-47f4-a10c-93cdbadd3967 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.531505918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d40ef1e9ae705b2f36579555948b0511c8dbddb262c26debdd3e285c14bc87,PodSandboxId:82af4f8d6aca4b4d9c0db3aefb218d55897d0fb41fedee8049acf4fe5cb14384,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733964317583385690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-djdb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42c3141-50e3-42cf-97f1-1882639f83d8,},Annotations:map[string]string{io.kubernetes.container.hash: 4ed9f1de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd6ff4627e13d5d02205eb57d8f235a496582df73c458e8d6c815a37cd6c515,PodSandboxId:1793a87a4b2b47685f8da9983c9077da34edee683bbf3d6d6fba46e817637b11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733964310381607700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6642d1e0-ecff-4830-897e-bbeaba84df2b,},Annotations:map[string]string{io.kubernetes.container.hash: 94424e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bfc654c074ab4e46fa63fbdd1e67cb3539130919be7632080b95b1aab9debd,PodSandboxId:de22e4484738ff1599e3cf17ee4abc7bb0a89b004d0e032e825372531ab511b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733964310134112586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5pw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1
551f28-3107-41ba-b271-29272d461671,},Annotations:map[string]string{io.kubernetes.container.hash: ee565a25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769ff943593df6c18c27c18e6c84df3218fe7692abc643c63773483170735d3,PodSandboxId:dfef157ceb194b2fac16a33d7b41c1848dd8c3d57ffb02ec710f6ea3a179190b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733964304161977079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b3d994ee80c7c52908d8f555e9f214,},Anno
tations:map[string]string{io.kubernetes.container.hash: af9757,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5f5a25dd83bf851abc56dbb88a821573f1d48456411e891c8099d035341ff2,PodSandboxId:14e33e5755b94a051c7152164004f68554aeaba49bc7c37731609b7f198fb48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733964304107465866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d221582497b883ffd1699b9
8dafa31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb8d47df5a621d4b3199bc984b19315c2a0097014c4ba811c54bdbfda365207,PodSandboxId:d102bdd28a051e011ed4e0166e2d7a9a7da9ef5ee404e7235b2435ddc81bd16b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733964304059982916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06f61a739063da64a7047b950fea0b0,},A
nnotations:map[string]string{io.kubernetes.container.hash: 33eda82f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2480975830f7f5feb7699bd51f9acb675615635ea040e89b36d0e53297d5fd16,PodSandboxId:11360547e801d47705ceebc0adb1f674a403e4b5a620d4d12fdd2dc28c717a43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733964304011030613,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c527b3d6240739e5536f8b88567dee4,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a86ffc4-675a-47f4-a10c-93cdbadd3967 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.539323856Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=e39eded0-bc81-49c0-bbaf-74d45be1cbed name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.539676610Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:82af4f8d6aca4b4d9c0db3aefb218d55897d0fb41fedee8049acf4fe5cb14384,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-djdb5,Uid:d42c3141-50e3-42cf-97f1-1882639f83d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733964317343966661,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-djdb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42c3141-50e3-42cf-97f1-1882639f83d8,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-12T00:45:09.328778617Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1793a87a4b2b47685f8da9983c9077da34edee683bbf3d6d6fba46e817637b11,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6642d1e0-ecff-4830-897e-bbeaba84df2b,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733964310237299091,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6642d1e0-ecff-4830-897e-bbeaba84df2b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-12T00:45:09.328803510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de22e4484738ff1599e3cf17ee4abc7bb0a89b004d0e032e825372531ab511b7,Metadata:&PodSandboxMetadata{Name:kube-proxy-m5pw7,Uid:d1551f28-3107-41ba-b271-29272d461671,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733964309939526613,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m5pw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1551f28-3107-41ba-b271-29272d461671,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-12T00:45:09.328801580Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:11360547e801d47705ceebc0adb1f674a403e4b5a620d4d12fdd2dc28c717a43,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-134802,Uid:7c527b3
d6240739e5536f8b88567dee4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733964303871522434,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c527b3d6240739e5536f8b88567dee4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7c527b3d6240739e5536f8b88567dee4,kubernetes.io/config.seen: 2024-12-12T00:45:03.319200789Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:14e33e5755b94a051c7152164004f68554aeaba49bc7c37731609b7f198fb48c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-134802,Uid:88d221582497b883ffd1699b98dafa31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733964303868897639,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-134802,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d221582497b883ffd1699b98dafa31,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 88d221582497b883ffd1699b98dafa31,kubernetes.io/config.seen: 2024-12-12T00:45:03.319199722Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dfef157ceb194b2fac16a33d7b41c1848dd8c3d57ffb02ec710f6ea3a179190b,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-134802,Uid:70b3d994ee80c7c52908d8f555e9f214,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733964303863760603,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b3d994ee80c7c52908d8f555e9f214,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.6:2379,kubernetes.io/config.hash: 70b3d994ee80c7c52908d8f555e9f214,kubernetes.io/config.seen: 2024-12-12T00:4
5:03.326056275Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d102bdd28a051e011ed4e0166e2d7a9a7da9ef5ee404e7235b2435ddc81bd16b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-134802,Uid:b06f61a739063da64a7047b950fea0b0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733964303862967220,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06f61a739063da64a7047b950fea0b0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.6:8443,kubernetes.io/config.hash: b06f61a739063da64a7047b950fea0b0,kubernetes.io/config.seen: 2024-12-12T00:45:03.319163660Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e39eded0-bc81-49c0-bbaf-74d45be1cbed name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.540277395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c7ab4f1-37c8-447a-af44-c7648d986cb2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.540347918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c7ab4f1-37c8-447a-af44-c7648d986cb2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 00:45:25 test-preload-134802 crio[672]: time="2024-12-12 00:45:25.540613317Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f6d40ef1e9ae705b2f36579555948b0511c8dbddb262c26debdd3e285c14bc87,PodSandboxId:82af4f8d6aca4b4d9c0db3aefb218d55897d0fb41fedee8049acf4fe5cb14384,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733964317583385690,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-djdb5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d42c3141-50e3-42cf-97f1-1882639f83d8,},Annotations:map[string]string{io.kubernetes.container.hash: 4ed9f1de,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd6ff4627e13d5d02205eb57d8f235a496582df73c458e8d6c815a37cd6c515,PodSandboxId:1793a87a4b2b47685f8da9983c9077da34edee683bbf3d6d6fba46e817637b11,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733964310381607700,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 6642d1e0-ecff-4830-897e-bbeaba84df2b,},Annotations:map[string]string{io.kubernetes.container.hash: 94424e89,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bfc654c074ab4e46fa63fbdd1e67cb3539130919be7632080b95b1aab9debd,PodSandboxId:de22e4484738ff1599e3cf17ee4abc7bb0a89b004d0e032e825372531ab511b7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733964310134112586,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m5pw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1
551f28-3107-41ba-b271-29272d461671,},Annotations:map[string]string{io.kubernetes.container.hash: ee565a25,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d769ff943593df6c18c27c18e6c84df3218fe7692abc643c63773483170735d3,PodSandboxId:dfef157ceb194b2fac16a33d7b41c1848dd8c3d57ffb02ec710f6ea3a179190b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733964304161977079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70b3d994ee80c7c52908d8f555e9f214,},Anno
tations:map[string]string{io.kubernetes.container.hash: af9757,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af5f5a25dd83bf851abc56dbb88a821573f1d48456411e891c8099d035341ff2,PodSandboxId:14e33e5755b94a051c7152164004f68554aeaba49bc7c37731609b7f198fb48c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733964304107465866,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88d221582497b883ffd1699b9
8dafa31,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb8d47df5a621d4b3199bc984b19315c2a0097014c4ba811c54bdbfda365207,PodSandboxId:d102bdd28a051e011ed4e0166e2d7a9a7da9ef5ee404e7235b2435ddc81bd16b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733964304059982916,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b06f61a739063da64a7047b950fea0b0,},A
nnotations:map[string]string{io.kubernetes.container.hash: 33eda82f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2480975830f7f5feb7699bd51f9acb675615635ea040e89b36d0e53297d5fd16,PodSandboxId:11360547e801d47705ceebc0adb1f674a403e4b5a620d4d12fdd2dc28c717a43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733964304011030613,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-134802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c527b3d6240739e5536f8b88567dee4,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c7ab4f1-37c8-447a-af44-c7648d986cb2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f6d40ef1e9ae7       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   82af4f8d6aca4       coredns-6d4b75cb6d-djdb5
	5dd6ff4627e13       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   1793a87a4b2b4       storage-provisioner
	a5bfc654c074a       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   de22e4484738f       kube-proxy-m5pw7
	d769ff943593d       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   dfef157ceb194       etcd-test-preload-134802
	af5f5a25dd83b       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   14e33e5755b94       kube-controller-manager-test-preload-134802
	deb8d47df5a62       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   d102bdd28a051       kube-apiserver-test-preload-134802
	2480975830f7f       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   11360547e801d       kube-scheduler-test-preload-134802
	
	
	==> coredns [f6d40ef1e9ae705b2f36579555948b0511c8dbddb262c26debdd3e285c14bc87] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:36539 - 25496 "HINFO IN 3151112364239970141.7032448943095281662. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012822881s
	
	
	==> describe nodes <==
	Name:               test-preload-134802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-134802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=test-preload-134802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_12T00_43_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 00:43:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-134802
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 00:45:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 00:45:18 +0000   Thu, 12 Dec 2024 00:43:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 00:45:18 +0000   Thu, 12 Dec 2024 00:43:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 00:45:18 +0000   Thu, 12 Dec 2024 00:43:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 00:45:18 +0000   Thu, 12 Dec 2024 00:45:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.6
	  Hostname:    test-preload-134802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9be29745a680458aa1f63a94dd9dac3c
	  System UUID:                9be29745-a680-458a-a1f6-3a94dd9dac3c
	  Boot ID:                    27d97c77-8884-4f78-aa08-f105d9a1c326
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-djdb5                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-test-preload-134802                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         117s
	  kube-system                 kube-apiserver-test-preload-134802             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-test-preload-134802    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-m5pw7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-test-preload-134802             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x5 over 2m6s)  kubelet          Node test-preload-134802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x5 over 2m6s)  kubelet          Node test-preload-134802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x4 over 2m6s)  kubelet          Node test-preload-134802 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node test-preload-134802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node test-preload-134802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node test-preload-134802 status is now: NodeHasSufficientPID
	  Normal  NodeReady                107s                 kubelet          Node test-preload-134802 status is now: NodeReady
	  Normal  RegisteredNode           105s                 node-controller  Node test-preload-134802 event: Registered Node test-preload-134802 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-134802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-134802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-134802 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-134802 event: Registered Node test-preload-134802 in Controller
	
	
	==> dmesg <==
	[Dec12 00:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052706] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042262] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.902481] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.722908] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.631590] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.524451] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.061506] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068485] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.173543] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.146147] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.268871] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[Dec12 00:45] systemd-fstab-generator[992]: Ignoring "noauto" option for root device
	[  +0.056386] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.657580] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +6.944511] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.150061] systemd-fstab-generator[1754]: Ignoring "noauto" option for root device
	[  +6.170809] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [d769ff943593df6c18c27c18e6c84df3218fe7692abc643c63773483170735d3] <==
	{"level":"info","ts":"2024-12-12T00:45:04.663Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"6f26d2d338759d80","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-12T00:45:04.666Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-12T00:45:04.669Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-12T00:45:04.670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 switched to configuration voters=(8009320791952170368)"}
	{"level":"info","ts":"2024-12-12T00:45:04.677Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","added-peer-id":"6f26d2d338759d80","added-peer-peer-urls":["https://192.168.39.6:2380"]}
	{"level":"info","ts":"2024-12-12T00:45:04.677Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1a1020f766a5ac01","local-member-id":"6f26d2d338759d80","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T00:45:04.677Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T00:45:04.679Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6f26d2d338759d80","initial-advertise-peer-urls":["https://192.168.39.6:2380"],"listen-peer-urls":["https://192.168.39.6:2380"],"advertise-client-urls":["https://192.168.39.6:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.6:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-12T00:45:04.679Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-12T00:45:04.672Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-12-12T00:45:04.680Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.6:2380"}
	{"level":"info","ts":"2024-12-12T00:45:05.925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-12T00:45:05.925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-12T00:45:05.925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgPreVoteResp from 6f26d2d338759d80 at term 2"}
	{"level":"info","ts":"2024-12-12T00:45:05.925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became candidate at term 3"}
	{"level":"info","ts":"2024-12-12T00:45:05.925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 received MsgVoteResp from 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2024-12-12T00:45:05.925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6f26d2d338759d80 became leader at term 3"}
	{"level":"info","ts":"2024-12-12T00:45:05.925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6f26d2d338759d80 elected leader 6f26d2d338759d80 at term 3"}
	{"level":"info","ts":"2024-12-12T00:45:05.930Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"6f26d2d338759d80","local-member-attributes":"{Name:test-preload-134802 ClientURLs:[https://192.168.39.6:2379]}","request-path":"/0/members/6f26d2d338759d80/attributes","cluster-id":"1a1020f766a5ac01","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-12T00:45:05.931Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T00:45:05.931Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T00:45:05.933Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.6:2379"}
	{"level":"info","ts":"2024-12-12T00:45:05.934Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-12T00:45:05.934Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-12T00:45:05.935Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:45:25 up 0 min,  0 users,  load average: 0.61, 0.19, 0.07
	Linux test-preload-134802 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [deb8d47df5a621d4b3199bc984b19315c2a0097014c4ba811c54bdbfda365207] <==
	I1212 00:45:08.295603       1 establishing_controller.go:76] Starting EstablishingController
	I1212 00:45:08.296064       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1212 00:45:08.296194       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1212 00:45:08.296292       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1212 00:45:08.308360       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1212 00:45:08.308452       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E1212 00:45:08.441970       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1212 00:45:08.458351       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1212 00:45:08.458538       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 00:45:08.459472       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1212 00:45:08.469474       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1212 00:45:08.469561       1 cache.go:39] Caches are synced for autoregister controller
	I1212 00:45:08.496041       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 00:45:08.504354       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1212 00:45:08.509602       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1212 00:45:08.958134       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1212 00:45:09.273923       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 00:45:10.073346       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1212 00:45:10.082383       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1212 00:45:10.126794       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1212 00:45:10.164477       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1212 00:45:10.170528       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1212 00:45:10.630214       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1212 00:45:21.497243       1 controller.go:611] quota admission added evaluator for: endpoints
	I1212 00:45:21.507345       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [af5f5a25dd83bf851abc56dbb88a821573f1d48456411e891c8099d035341ff2] <==
	I1212 00:45:21.474832       1 shared_informer.go:262] Caches are synced for service account
	I1212 00:45:21.476108       1 shared_informer.go:262] Caches are synced for daemon sets
	I1212 00:45:21.477272       1 shared_informer.go:262] Caches are synced for HPA
	I1212 00:45:21.481735       1 shared_informer.go:262] Caches are synced for endpoint
	I1212 00:45:21.481844       1 shared_informer.go:262] Caches are synced for deployment
	I1212 00:45:21.483292       1 shared_informer.go:262] Caches are synced for PVC protection
	I1212 00:45:21.484755       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1212 00:45:21.485711       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1212 00:45:21.485900       1 shared_informer.go:262] Caches are synced for expand
	I1212 00:45:21.492456       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1212 00:45:21.493014       1 shared_informer.go:262] Caches are synced for stateful set
	I1212 00:45:21.497554       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1212 00:45:21.505998       1 event.go:294] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I1212 00:45:21.543748       1 shared_informer.go:262] Caches are synced for job
	I1212 00:45:21.582999       1 shared_informer.go:262] Caches are synced for disruption
	I1212 00:45:21.583085       1 disruption.go:371] Sending events to api server.
	I1212 00:45:21.583056       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1212 00:45:21.595545       1 shared_informer.go:262] Caches are synced for cronjob
	I1212 00:45:21.602909       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1212 00:45:21.665803       1 shared_informer.go:262] Caches are synced for resource quota
	I1212 00:45:21.667595       1 shared_informer.go:262] Caches are synced for resource quota
	I1212 00:45:21.689735       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1212 00:45:22.082757       1 shared_informer.go:262] Caches are synced for garbage collector
	I1212 00:45:22.082798       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1212 00:45:22.112608       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [a5bfc654c074ab4e46fa63fbdd1e67cb3539130919be7632080b95b1aab9debd] <==
	I1212 00:45:10.480671       1 node.go:163] Successfully retrieved node IP: 192.168.39.6
	I1212 00:45:10.480866       1 server_others.go:138] "Detected node IP" address="192.168.39.6"
	I1212 00:45:10.481347       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1212 00:45:10.610668       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1212 00:45:10.614514       1 server_others.go:206] "Using iptables Proxier"
	I1212 00:45:10.614655       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1212 00:45:10.615845       1 server.go:661] "Version info" version="v1.24.4"
	I1212 00:45:10.616252       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 00:45:10.620097       1 config.go:317] "Starting service config controller"
	I1212 00:45:10.620321       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1212 00:45:10.620941       1 config.go:226] "Starting endpoint slice config controller"
	I1212 00:45:10.623593       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1212 00:45:10.622886       1 config.go:444] "Starting node config controller"
	I1212 00:45:10.623905       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1212 00:45:10.721283       1 shared_informer.go:262] Caches are synced for service config
	I1212 00:45:10.724683       1 shared_informer.go:262] Caches are synced for node config
	I1212 00:45:10.724700       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2480975830f7f5feb7699bd51f9acb675615635ea040e89b36d0e53297d5fd16] <==
	W1212 00:45:08.423087       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 00:45:08.423174       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1212 00:45:08.423285       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 00:45:08.423366       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 00:45:08.427376       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 00:45:08.427476       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1212 00:45:08.427486       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 00:45:08.427492       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1212 00:45:08.427609       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 00:45:08.427698       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1212 00:45:08.427839       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 00:45:08.427928       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1212 00:45:08.428210       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1212 00:45:08.428300       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1212 00:45:08.428460       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 00:45:08.428489       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1212 00:45:08.428590       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 00:45:08.428688       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1212 00:45:08.428799       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 00:45:08.428825       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1212 00:45:08.429025       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 00:45:08.429109       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1212 00:45:08.429245       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 00:45:08.429333       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1212 00:45:09.400305       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 00:45:08 test-preload-134802 kubelet[1127]: I1212 00:45:08.528615    1127 setters.go:532] "Node became not ready" node="test-preload-134802" condition={Type:Ready Status:False LastHeartbeatTime:2024-12-12 00:45:08.52853256 +0000 UTC m=+5.354745535 LastTransitionTime:2024-12-12 00:45:08.52853256 +0000 UTC m=+5.354745535 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Dec 12 00:45:08 test-preload-134802 kubelet[1127]: E1212 00:45:08.881667    1127 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-test-preload-134802\" already exists" pod="kube-system/kube-apiserver-test-preload-134802"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.325843    1127 apiserver.go:52] "Watching apiserver"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.329091    1127 topology_manager.go:200] "Topology Admit Handler"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.329192    1127 topology_manager.go:200] "Topology Admit Handler"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.329238    1127 topology_manager.go:200] "Topology Admit Handler"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: E1212 00:45:09.330293    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-djdb5" podUID=d42c3141-50e3-42cf-97f1-1882639f83d8
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382465    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k546c\" (UniqueName: \"kubernetes.io/projected/d1551f28-3107-41ba-b271-29272d461671-kube-api-access-k546c\") pod \"kube-proxy-m5pw7\" (UID: \"d1551f28-3107-41ba-b271-29272d461671\") " pod="kube-system/kube-proxy-m5pw7"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382503    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1551f28-3107-41ba-b271-29272d461671-lib-modules\") pod \"kube-proxy-m5pw7\" (UID: \"d1551f28-3107-41ba-b271-29272d461671\") " pod="kube-system/kube-proxy-m5pw7"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382525    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmzss\" (UniqueName: \"kubernetes.io/projected/6642d1e0-ecff-4830-897e-bbeaba84df2b-kube-api-access-mmzss\") pod \"storage-provisioner\" (UID: \"6642d1e0-ecff-4830-897e-bbeaba84df2b\") " pod="kube-system/storage-provisioner"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382548    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmt6\" (UniqueName: \"kubernetes.io/projected/d42c3141-50e3-42cf-97f1-1882639f83d8-kube-api-access-cmmt6\") pod \"coredns-6d4b75cb6d-djdb5\" (UID: \"d42c3141-50e3-42cf-97f1-1882639f83d8\") " pod="kube-system/coredns-6d4b75cb6d-djdb5"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382569    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1551f28-3107-41ba-b271-29272d461671-kube-proxy\") pod \"kube-proxy-m5pw7\" (UID: \"d1551f28-3107-41ba-b271-29272d461671\") " pod="kube-system/kube-proxy-m5pw7"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382586    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1551f28-3107-41ba-b271-29272d461671-xtables-lock\") pod \"kube-proxy-m5pw7\" (UID: \"d1551f28-3107-41ba-b271-29272d461671\") " pod="kube-system/kube-proxy-m5pw7"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382605    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6642d1e0-ecff-4830-897e-bbeaba84df2b-tmp\") pod \"storage-provisioner\" (UID: \"6642d1e0-ecff-4830-897e-bbeaba84df2b\") " pod="kube-system/storage-provisioner"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382622    1127 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume\") pod \"coredns-6d4b75cb6d-djdb5\" (UID: \"d42c3141-50e3-42cf-97f1-1882639f83d8\") " pod="kube-system/coredns-6d4b75cb6d-djdb5"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: I1212 00:45:09.382639    1127 reconciler.go:159] "Reconciler: start to sync state"
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: E1212 00:45:09.487221    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: E1212 00:45:09.487365    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume podName:d42c3141-50e3-42cf-97f1-1882639f83d8 nodeName:}" failed. No retries permitted until 2024-12-12 00:45:09.98729284 +0000 UTC m=+6.813505819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume") pod "coredns-6d4b75cb6d-djdb5" (UID: "d42c3141-50e3-42cf-97f1-1882639f83d8") : object "kube-system"/"coredns" not registered
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: E1212 00:45:09.996917    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:45:09 test-preload-134802 kubelet[1127]: E1212 00:45:09.996977    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume podName:d42c3141-50e3-42cf-97f1-1882639f83d8 nodeName:}" failed. No retries permitted until 2024-12-12 00:45:10.996964177 +0000 UTC m=+7.823177152 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume") pod "coredns-6d4b75cb6d-djdb5" (UID: "d42c3141-50e3-42cf-97f1-1882639f83d8") : object "kube-system"/"coredns" not registered
	Dec 12 00:45:11 test-preload-134802 kubelet[1127]: E1212 00:45:11.004059    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:45:11 test-preload-134802 kubelet[1127]: E1212 00:45:11.004692    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume podName:d42c3141-50e3-42cf-97f1-1882639f83d8 nodeName:}" failed. No retries permitted until 2024-12-12 00:45:13.004674104 +0000 UTC m=+9.830887077 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume") pod "coredns-6d4b75cb6d-djdb5" (UID: "d42c3141-50e3-42cf-97f1-1882639f83d8") : object "kube-system"/"coredns" not registered
	Dec 12 00:45:11 test-preload-134802 kubelet[1127]: E1212 00:45:11.427865    1127 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-djdb5" podUID=d42c3141-50e3-42cf-97f1-1882639f83d8
	Dec 12 00:45:13 test-preload-134802 kubelet[1127]: E1212 00:45:13.023653    1127 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 12 00:45:13 test-preload-134802 kubelet[1127]: E1212 00:45:13.023740    1127 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume podName:d42c3141-50e3-42cf-97f1-1882639f83d8 nodeName:}" failed. No retries permitted until 2024-12-12 00:45:17.023724761 +0000 UTC m=+13.849937723 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d42c3141-50e3-42cf-97f1-1882639f83d8-config-volume") pod "coredns-6d4b75cb6d-djdb5" (UID: "d42c3141-50e3-42cf-97f1-1882639f83d8") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [5dd6ff4627e13d5d02205eb57d8f235a496582df73c458e8d6c815a37cd6c515] <==
	I1212 00:45:10.537874       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-134802 -n test-preload-134802
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-134802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-134802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-134802
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-134802: (1.140721559s)
--- FAIL: TestPreload (212.45s)
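To triage this outside CI, the failing test can be re-run in isolation from a minikube source checkout with the standard Go test runner. The sketch below is a starting point, not this job's exact invocation: the -run selector and package path follow from the test name above, while the start-args flag that selects the kvm2 driver and crio runtime is an assumption inferred from the job name and should be checked against the local test harness before use.

	# Sketch only: re-run TestPreload against kvm2 + crio.
	# --minikube-start-args is assumed from the job configuration; confirm the flag name in test/integration before relying on it.
	cd minikube   # assumed repository checkout root
	go test -v -timeout 30m ./test/integration -run 'TestPreload$' \
	  -args --minikube-start-args='--driver=kvm2 --container-runtime=crio'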

                                                
                                    
TestKubernetesUpgrade (411.2s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m57.409531377s)
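The failure occurs while bootstrapping the v1.20.0 control plane; the captured stdout below shows the certificate-generation and control-plane boot steps being attempted twice before the command gives up. When reproducing this, the guest-side kubelet and CRI-O journals tend to carry the underlying kubeadm error. A minimal sketch, assuming the kubernetes-upgrade-459384 profile from this run still exists:

	# Sketch: pull guest-side logs for the failed bootstrap (profile name taken from the command above).
	out/minikube-linux-amd64 -p kubernetes-upgrade-459384 ssh "sudo journalctl -u kubelet --no-pager | tail -n 100"
	out/minikube-linux-amd64 -p kubernetes-upgrade-459384 ssh "sudo journalctl -u crio --no-pager | tail -n 100"
	out/minikube-linux-amd64 -p kubernetes-upgrade-459384 logs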

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-459384] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-459384" primary control-plane node in "kubernetes-upgrade-459384" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:47:21.800392  130521 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:47:21.800520  130521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:47:21.800532  130521 out.go:358] Setting ErrFile to fd 2...
	I1212 00:47:21.800539  130521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:47:21.800709  130521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:47:21.801293  130521 out.go:352] Setting JSON to false
	I1212 00:47:21.802263  130521 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12584,"bootTime":1733951858,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:47:21.802316  130521 start.go:139] virtualization: kvm guest
	I1212 00:47:21.804809  130521 out.go:177] * [kubernetes-upgrade-459384] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:47:21.806494  130521 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:47:21.806556  130521 notify.go:220] Checking for updates...
	I1212 00:47:21.808837  130521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:47:21.811833  130521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:47:21.814020  130521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:47:21.816382  130521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:47:21.818756  130521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:47:21.820125  130521 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:47:21.857916  130521 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 00:47:21.859111  130521 start.go:297] selected driver: kvm2
	I1212 00:47:21.859123  130521 start.go:901] validating driver "kvm2" against <nil>
	I1212 00:47:21.859136  130521 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:47:21.860123  130521 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:47:21.877306  130521 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:47:21.893475  130521 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:47:21.893538  130521 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1212 00:47:21.893851  130521 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 00:47:21.893885  130521 cni.go:84] Creating CNI manager for ""
	I1212 00:47:21.893944  130521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:47:21.893959  130521 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 00:47:21.894025  130521 start.go:340] cluster config:
	{Name:kubernetes-upgrade-459384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-459384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:47:21.894174  130521 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:47:21.895951  130521 out.go:177] * Starting "kubernetes-upgrade-459384" primary control-plane node in "kubernetes-upgrade-459384" cluster
	I1212 00:47:21.897372  130521 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:47:21.897433  130521 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:47:21.897450  130521 cache.go:56] Caching tarball of preloaded images
	I1212 00:47:21.897547  130521 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:47:21.897562  130521 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:47:21.897984  130521 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/config.json ...
	I1212 00:47:21.898018  130521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/config.json: {Name:mk3727e8ce02bf50403c9583ee18698234060f33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:47:21.898198  130521 start.go:360] acquireMachinesLock for kubernetes-upgrade-459384: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:47:45.136778  130521 start.go:364] duration metric: took 23.238545645s to acquireMachinesLock for "kubernetes-upgrade-459384"
	I1212 00:47:45.136852  130521 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-459384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-459384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:47:45.137005  130521 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 00:47:45.139105  130521 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 00:47:45.139302  130521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:47:45.139389  130521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:47:45.156145  130521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I1212 00:47:45.156522  130521 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:47:45.157091  130521 main.go:141] libmachine: Using API Version  1
	I1212 00:47:45.157111  130521 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:47:45.157453  130521 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:47:45.157652  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetMachineName
	I1212 00:47:45.157797  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:47:45.157977  130521 start.go:159] libmachine.API.Create for "kubernetes-upgrade-459384" (driver="kvm2")
	I1212 00:47:45.158036  130521 client.go:168] LocalClient.Create starting
	I1212 00:47:45.158079  130521 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 00:47:45.158122  130521 main.go:141] libmachine: Decoding PEM data...
	I1212 00:47:45.158146  130521 main.go:141] libmachine: Parsing certificate...
	I1212 00:47:45.158220  130521 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 00:47:45.158248  130521 main.go:141] libmachine: Decoding PEM data...
	I1212 00:47:45.158266  130521 main.go:141] libmachine: Parsing certificate...
	I1212 00:47:45.158293  130521 main.go:141] libmachine: Running pre-create checks...
	I1212 00:47:45.158309  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .PreCreateCheck
	I1212 00:47:45.158633  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetConfigRaw
	I1212 00:47:45.159029  130521 main.go:141] libmachine: Creating machine...
	I1212 00:47:45.159044  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .Create
	I1212 00:47:45.159174  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Creating KVM machine...
	I1212 00:47:45.160237  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found existing default KVM network
	I1212 00:47:45.161024  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:45.160881  130833 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4c:e8:f4} reservation:<nil>}
	I1212 00:47:45.161570  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:45.161508  130833 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002463e0}
	I1212 00:47:45.161611  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | created network xml: 
	I1212 00:47:45.161631  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | <network>
	I1212 00:47:45.161642  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |   <name>mk-kubernetes-upgrade-459384</name>
	I1212 00:47:45.161650  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |   <dns enable='no'/>
	I1212 00:47:45.161679  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |   
	I1212 00:47:45.161691  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1212 00:47:45.161700  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |     <dhcp>
	I1212 00:47:45.161714  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1212 00:47:45.161725  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |     </dhcp>
	I1212 00:47:45.161734  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |   </ip>
	I1212 00:47:45.161745  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG |   
	I1212 00:47:45.161753  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | </network>
	I1212 00:47:45.161763  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | 
	I1212 00:47:45.166642  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | trying to create private KVM network mk-kubernetes-upgrade-459384 192.168.50.0/24...
	I1212 00:47:45.240836  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | private KVM network mk-kubernetes-upgrade-459384 192.168.50.0/24 created
	I1212 00:47:45.240878  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384 ...
	I1212 00:47:45.240893  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:45.240794  130833 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:47:45.240928  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 00:47:45.240954  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 00:47:45.516027  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:45.515880  130833 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa...
	I1212 00:47:45.698520  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:45.698367  130833 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/kubernetes-upgrade-459384.rawdisk...
	I1212 00:47:45.698562  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Writing magic tar header
	I1212 00:47:45.698583  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Writing SSH key tar header
	I1212 00:47:45.698607  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:45.698513  130833 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384 ...
	I1212 00:47:45.698630  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384
	I1212 00:47:45.698697  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384 (perms=drwx------)
	I1212 00:47:45.698721  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 00:47:45.698733  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 00:47:45.698751  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 00:47:45.698760  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 00:47:45.698779  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 00:47:45.698793  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 00:47:45.698807  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:47:45.698818  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Creating domain...
	I1212 00:47:45.698828  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 00:47:45.698845  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 00:47:45.698854  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Checking permissions on dir: /home/jenkins
	I1212 00:47:45.698860  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Checking permissions on dir: /home
	I1212 00:47:45.698868  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Skipping /home - not owner
	I1212 00:47:45.700052  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) define libvirt domain using xml: 
	I1212 00:47:45.700078  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) <domain type='kvm'>
	I1212 00:47:45.700091  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   <name>kubernetes-upgrade-459384</name>
	I1212 00:47:45.700106  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   <memory unit='MiB'>2200</memory>
	I1212 00:47:45.700117  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   <vcpu>2</vcpu>
	I1212 00:47:45.700128  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   <features>
	I1212 00:47:45.700139  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <acpi/>
	I1212 00:47:45.700148  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <apic/>
	I1212 00:47:45.700178  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <pae/>
	I1212 00:47:45.700188  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     
	I1212 00:47:45.700196  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   </features>
	I1212 00:47:45.700204  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   <cpu mode='host-passthrough'>
	I1212 00:47:45.700233  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   
	I1212 00:47:45.700260  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   </cpu>
	I1212 00:47:45.700324  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   <os>
	I1212 00:47:45.700341  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <type>hvm</type>
	I1212 00:47:45.700359  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <boot dev='cdrom'/>
	I1212 00:47:45.700374  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <boot dev='hd'/>
	I1212 00:47:45.700386  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <bootmenu enable='no'/>
	I1212 00:47:45.700396  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   </os>
	I1212 00:47:45.700404  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   <devices>
	I1212 00:47:45.700416  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <disk type='file' device='cdrom'>
	I1212 00:47:45.700433  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/boot2docker.iso'/>
	I1212 00:47:45.700454  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <target dev='hdc' bus='scsi'/>
	I1212 00:47:45.700467  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <readonly/>
	I1212 00:47:45.700480  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     </disk>
	I1212 00:47:45.700492  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <disk type='file' device='disk'>
	I1212 00:47:45.700505  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 00:47:45.700543  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/kubernetes-upgrade-459384.rawdisk'/>
	I1212 00:47:45.700568  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <target dev='hda' bus='virtio'/>
	I1212 00:47:45.700594  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     </disk>
	I1212 00:47:45.700613  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <interface type='network'>
	I1212 00:47:45.700627  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <source network='mk-kubernetes-upgrade-459384'/>
	I1212 00:47:45.700647  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <model type='virtio'/>
	I1212 00:47:45.700658  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     </interface>
	I1212 00:47:45.700666  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <interface type='network'>
	I1212 00:47:45.700677  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <source network='default'/>
	I1212 00:47:45.700691  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <model type='virtio'/>
	I1212 00:47:45.700703  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     </interface>
	I1212 00:47:45.700713  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <serial type='pty'>
	I1212 00:47:45.700722  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <target port='0'/>
	I1212 00:47:45.700731  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     </serial>
	I1212 00:47:45.700744  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <console type='pty'>
	I1212 00:47:45.700755  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <target type='serial' port='0'/>
	I1212 00:47:45.700775  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     </console>
	I1212 00:47:45.700796  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     <rng model='virtio'>
	I1212 00:47:45.700808  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)       <backend model='random'>/dev/random</backend>
	I1212 00:47:45.700818  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     </rng>
	I1212 00:47:45.700825  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     
	I1212 00:47:45.700834  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)     
	I1212 00:47:45.700842  130521 main.go:141] libmachine: (kubernetes-upgrade-459384)   </devices>
	I1212 00:47:45.700852  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) </domain>
	I1212 00:47:45.700862  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) 
	I1212 00:47:45.704970  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fd:13:4b in network default
	I1212 00:47:45.705545  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:45.705560  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Ensuring networks are active...
	I1212 00:47:45.706317  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Ensuring network default is active
	I1212 00:47:45.706704  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Ensuring network mk-kubernetes-upgrade-459384 is active
	I1212 00:47:45.707180  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Getting domain xml...
	I1212 00:47:45.707996  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Creating domain...
	I1212 00:47:47.013062  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Waiting to get IP...
	I1212 00:47:47.014035  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:47.014469  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:47.014535  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:47.014466  130833 retry.go:31] will retry after 232.868668ms: waiting for machine to come up
	I1212 00:47:47.249146  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:47.249741  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:47.249774  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:47.249694  130833 retry.go:31] will retry after 281.202063ms: waiting for machine to come up
	I1212 00:47:47.532356  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:47.532811  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:47.532840  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:47.532784  130833 retry.go:31] will retry after 373.190752ms: waiting for machine to come up
	I1212 00:47:47.907546  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:47.907990  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:47.908013  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:47.907958  130833 retry.go:31] will retry after 398.124593ms: waiting for machine to come up
	I1212 00:47:48.307524  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:48.308113  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:48.308137  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:48.308056  130833 retry.go:31] will retry after 615.29467ms: waiting for machine to come up
	I1212 00:47:48.924908  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:48.925423  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:48.925455  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:48.925376  130833 retry.go:31] will retry after 922.845459ms: waiting for machine to come up
	I1212 00:47:49.849824  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:49.850365  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:49.850392  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:49.850290  130833 retry.go:31] will retry after 975.549222ms: waiting for machine to come up
	I1212 00:47:50.827648  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:50.828255  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:50.828325  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:50.828226  130833 retry.go:31] will retry after 1.17560748s: waiting for machine to come up
	I1212 00:47:52.005683  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:52.006081  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:52.006143  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:52.006058  130833 retry.go:31] will retry after 1.586754446s: waiting for machine to come up
	I1212 00:47:53.594856  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:53.595266  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:53.595296  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:53.595220  130833 retry.go:31] will retry after 2.240564873s: waiting for machine to come up
	I1212 00:47:55.838043  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:55.838626  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:55.838658  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:55.838585  130833 retry.go:31] will retry after 2.734073566s: waiting for machine to come up
	I1212 00:47:58.576421  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:47:58.576876  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:47:58.576902  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:47:58.576834  130833 retry.go:31] will retry after 2.245533288s: waiting for machine to come up
	I1212 00:48:00.823529  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:00.823867  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:48:00.823892  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:48:00.823828  130833 retry.go:31] will retry after 3.730509242s: waiting for machine to come up
	I1212 00:48:04.558561  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:04.558929  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find current IP address of domain kubernetes-upgrade-459384 in network mk-kubernetes-upgrade-459384
	I1212 00:48:04.558963  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | I1212 00:48:04.558908  130833 retry.go:31] will retry after 4.293990502s: waiting for machine to come up
	I1212 00:48:08.857492  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:08.858032  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has current primary IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:08.858064  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Found IP for machine: 192.168.50.209
	I1212 00:48:08.858078  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Reserving static IP address...
	I1212 00:48:08.858400  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-459384", mac: "52:54:00:fb:4f:45", ip: "192.168.50.209"} in network mk-kubernetes-upgrade-459384
	I1212 00:48:08.931557  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Reserved static IP address: 192.168.50.209
	I1212 00:48:08.931586  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Waiting for SSH to be available...
	I1212 00:48:08.931609  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Getting to WaitForSSH function...
	I1212 00:48:08.934470  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:08.934963  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384
	I1212 00:48:08.934993  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-459384 interface with MAC address 52:54:00:fb:4f:45
	I1212 00:48:08.935160  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Using SSH client type: external
	I1212 00:48:08.935183  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa (-rw-------)
	I1212 00:48:08.935251  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:48:08.935288  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | About to run SSH command:
	I1212 00:48:08.935313  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | exit 0
	I1212 00:48:08.939070  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | SSH cmd err, output: exit status 255: 
	I1212 00:48:08.939107  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1212 00:48:08.939130  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | command : exit 0
	I1212 00:48:08.939143  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | err     : exit status 255
	I1212 00:48:08.939182  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | output  : 
	I1212 00:48:11.939959  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Getting to WaitForSSH function...
	I1212 00:48:11.942469  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:11.942841  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:11.942889  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:11.942981  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Using SSH client type: external
	I1212 00:48:11.943005  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa (-rw-------)
	I1212 00:48:11.943035  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:48:11.943048  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | About to run SSH command:
	I1212 00:48:11.943066  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | exit 0
	I1212 00:48:12.067357  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | SSH cmd err, output: <nil>: 
	I1212 00:48:12.067652  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) KVM machine creation complete!
	I1212 00:48:12.068017  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetConfigRaw
	I1212 00:48:12.068685  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:48:12.068883  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:48:12.069038  130521 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:48:12.069055  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetState
	I1212 00:48:12.070334  130521 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:48:12.070350  130521 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:48:12.070355  130521 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:48:12.070361  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:12.072573  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.072961  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.072991  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.073166  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:12.073320  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.073491  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.073609  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:12.073799  130521 main.go:141] libmachine: Using SSH client type: native
	I1212 00:48:12.074044  130521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:48:12.074059  130521 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:48:12.170790  130521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:48:12.170829  130521 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:48:12.170842  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:12.173671  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.174023  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.174050  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.174226  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:12.174424  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.174560  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.174674  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:12.174854  130521 main.go:141] libmachine: Using SSH client type: native
	I1212 00:48:12.175018  130521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:48:12.175028  130521 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:48:12.276182  130521 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:48:12.276272  130521 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:48:12.276285  130521 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:48:12.276294  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetMachineName
	I1212 00:48:12.276550  130521 buildroot.go:166] provisioning hostname "kubernetes-upgrade-459384"
	I1212 00:48:12.276589  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetMachineName
	I1212 00:48:12.276787  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:12.279348  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.279753  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.279785  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.279884  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:12.280072  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.280219  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.280358  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:12.280517  130521 main.go:141] libmachine: Using SSH client type: native
	I1212 00:48:12.280701  130521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:48:12.280722  130521 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-459384 && echo "kubernetes-upgrade-459384" | sudo tee /etc/hostname
	I1212 00:48:12.395049  130521 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-459384
	
	I1212 00:48:12.395080  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:12.397951  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.398309  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.398364  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.398502  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:12.398665  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.398835  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.398959  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:12.399105  130521 main.go:141] libmachine: Using SSH client type: native
	I1212 00:48:12.399339  130521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:48:12.399364  130521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-459384' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-459384/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-459384' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:48:12.510670  130521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:48:12.510711  130521 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:48:12.510755  130521 buildroot.go:174] setting up certificates
	I1212 00:48:12.510769  130521 provision.go:84] configureAuth start
	I1212 00:48:12.510791  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetMachineName
	I1212 00:48:12.511088  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetIP
	I1212 00:48:12.513987  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.514357  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.514386  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.514552  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:12.516775  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.517071  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.517141  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.517211  130521 provision.go:143] copyHostCerts
	I1212 00:48:12.517275  130521 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:48:12.517308  130521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:48:12.517378  130521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:48:12.517509  130521 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:48:12.517521  130521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:48:12.517544  130521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:48:12.517599  130521 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:48:12.517606  130521 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:48:12.517623  130521 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:48:12.517666  130521 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-459384 san=[127.0.0.1 192.168.50.209 kubernetes-upgrade-459384 localhost minikube]
	I1212 00:48:12.581144  130521 provision.go:177] copyRemoteCerts
	I1212 00:48:12.581208  130521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:48:12.581235  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:12.583815  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.584330  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.584361  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.584513  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:12.584721  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.584840  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:12.584985  130521 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa Username:docker}
	I1212 00:48:12.666300  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:48:12.694867  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 00:48:12.722522  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:48:12.746543  130521 provision.go:87] duration metric: took 235.751369ms to configureAuth
	I1212 00:48:12.746580  130521 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:48:12.746809  130521 config.go:182] Loaded profile config "kubernetes-upgrade-459384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:48:12.746924  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:12.749865  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.750242  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.750278  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.750408  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:12.750571  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.750730  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.750848  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:12.751009  130521 main.go:141] libmachine: Using SSH client type: native
	I1212 00:48:12.751205  130521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:48:12.751224  130521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:48:12.964483  130521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:48:12.964550  130521 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:48:12.964565  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetURL
	I1212 00:48:12.965835  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | Using libvirt version 6000000
	I1212 00:48:12.968074  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.968446  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.968476  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.968678  130521 main.go:141] libmachine: Docker is up and running!
	I1212 00:48:12.968691  130521 main.go:141] libmachine: Reticulating splines...
	I1212 00:48:12.968698  130521 client.go:171] duration metric: took 27.810650691s to LocalClient.Create
	I1212 00:48:12.968735  130521 start.go:167] duration metric: took 27.810749066s to libmachine.API.Create "kubernetes-upgrade-459384"
	I1212 00:48:12.968747  130521 start.go:293] postStartSetup for "kubernetes-upgrade-459384" (driver="kvm2")
	I1212 00:48:12.968756  130521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:48:12.968773  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:48:12.969028  130521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:48:12.969056  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:12.970978  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.971281  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:12.971310  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:12.971473  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:12.971673  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:12.971853  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:12.971993  130521 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa Username:docker}
	I1212 00:48:13.049472  130521 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:48:13.053968  130521 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:48:13.053998  130521 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:48:13.054111  130521 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:48:13.054191  130521 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:48:13.054286  130521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:48:13.063337  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:48:13.088482  130521 start.go:296] duration metric: took 119.721866ms for postStartSetup
	I1212 00:48:13.088547  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetConfigRaw
	I1212 00:48:13.089127  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetIP
	I1212 00:48:13.091558  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.091951  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:13.091972  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.092225  130521 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/config.json ...
	I1212 00:48:13.092410  130521 start.go:128] duration metric: took 27.95539277s to createHost
	I1212 00:48:13.092438  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:13.094678  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.095028  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:13.095058  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.095197  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:13.095347  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:13.095470  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:13.095585  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:13.095730  130521 main.go:141] libmachine: Using SSH client type: native
	I1212 00:48:13.095931  130521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:48:13.095946  130521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:48:13.196298  130521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733964493.172570342
	
	I1212 00:48:13.196324  130521 fix.go:216] guest clock: 1733964493.172570342
	I1212 00:48:13.196332  130521 fix.go:229] Guest: 2024-12-12 00:48:13.172570342 +0000 UTC Remote: 2024-12-12 00:48:13.092421215 +0000 UTC m=+51.341598768 (delta=80.149127ms)
	I1212 00:48:13.196362  130521 fix.go:200] guest clock delta is within tolerance: 80.149127ms
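
The fix.go lines above read the guest clock with `date +%s.%N` over SSH and accept the host when the guest/host delta falls inside a tolerance window (here the magnitude is 80.149127ms). A rough Go sketch of that comparison follows; the 2-second tolerance is an assumption for illustration, not a value taken from minikube.

    package main

    import (
        "fmt"
        "log"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns host-minus-guest.
    // Float parsing keeps only sub-microsecond precision, which is fine for an ~80ms delta.
    func clockDelta(guestDateOutput string, hostNow time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestDateOutput), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return hostNow.Sub(guest), nil
    }

    func main() {
        // Values taken from the log above: guest 1733964493.172570342, host 00:48:13.092421215 UTC.
        host := time.Date(2024, 12, 12, 0, 48, 13, 92421215, time.UTC)
        delta, err := clockDelta("1733964493.172570342", host)
        if err != nil {
            log.Fatal(err)
        }
        const tolerance = 2 * time.Second // assumed tolerance, for illustration only
        fmt.Printf("delta=%v within tolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
    }
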
	I1212 00:48:13.196367  130521 start.go:83] releasing machines lock for "kubernetes-upgrade-459384", held for 28.059551353s
	I1212 00:48:13.196392  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:48:13.196771  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetIP
	I1212 00:48:13.199635  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.200015  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:13.200043  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.200159  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:48:13.200666  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:48:13.200835  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:48:13.200942  130521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:48:13.200987  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:13.201047  130521 ssh_runner.go:195] Run: cat /version.json
	I1212 00:48:13.201072  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:48:13.203519  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.203830  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.203919  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:13.203950  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.204051  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:13.204172  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:13.204200  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:13.204202  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:13.204383  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:13.204384  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:48:13.204543  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:48:13.204592  130521 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa Username:docker}
	I1212 00:48:13.204675  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:48:13.204766  130521 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa Username:docker}
	I1212 00:48:13.281239  130521 ssh_runner.go:195] Run: systemctl --version
	I1212 00:48:13.312821  130521 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:48:13.471341  130521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:48:13.477789  130521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:48:13.477859  130521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:48:13.499285  130521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:48:13.499312  130521 start.go:495] detecting cgroup driver to use...
	I1212 00:48:13.499379  130521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:48:13.518160  130521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:48:13.535337  130521 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:48:13.535394  130521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:48:13.552346  130521 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:48:13.567114  130521 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:48:13.694130  130521 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:48:13.861996  130521 docker.go:233] disabling docker service ...
	I1212 00:48:13.862093  130521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:48:13.877304  130521 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:48:13.890402  130521 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:48:14.016285  130521 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:48:14.150927  130521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:48:14.166759  130521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:48:14.186219  130521 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 00:48:14.186293  130521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:48:14.197499  130521 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:48:14.197558  130521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:48:14.208580  130521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:48:14.219484  130521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:48:14.230491  130521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:48:14.241766  130521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:48:14.251797  130521 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:48:14.251864  130521 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:48:14.265926  130521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:48:14.275998  130521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:48:14.401841  130521 ssh_runner.go:195] Run: sudo systemctl restart crio
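
The block above reduces the container-runtime preparation to a short sequence of remote shell commands: write /etc/crictl.yaml, point CRI-O at the pause:3.2 image, switch it to the cgroupfs cgroup manager, drop and re-add conmon_cgroup, clear the mk CNI leftovers, load br_netfilter (because the bridge-nf-call-iptables sysctl was missing), enable IPv4 forwarding, and restart crio. The small Go program below simply lists those commands in order (the crictl.yaml write is paraphrased from the log); minikube actually issues them over SSH through its ssh_runner, so do not run them against your own CRI-O host.

    package main

    import "fmt"

    func main() {
        steps := []string{
            `printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml`,
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo rm -rf /etc/cni/net.mk`,
            `sudo modprobe br_netfilter`, // fallback because the bridge-nf-call-iptables sysctl key was absent
            `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
            `sudo systemctl daemon-reload`,
            `sudo systemctl restart crio`,
        }
        for i, s := range steps {
            fmt.Printf("%2d. %s\n", i+1, s)
        }
    }
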
	I1212 00:48:14.504032  130521 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:48:14.504125  130521 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:48:14.509135  130521 start.go:563] Will wait 60s for crictl version
	I1212 00:48:14.509199  130521 ssh_runner.go:195] Run: which crictl
	I1212 00:48:14.513170  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:48:14.551734  130521 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:48:14.551837  130521 ssh_runner.go:195] Run: crio --version
	I1212 00:48:14.578906  130521 ssh_runner.go:195] Run: crio --version
	I1212 00:48:14.609351  130521 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1212 00:48:14.610601  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetIP
	I1212 00:48:14.613297  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:14.613687  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:48:01 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:48:14.613717  130521 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:48:14.613897  130521 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 00:48:14.618548  130521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:48:14.632450  130521 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-459384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-459384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:48:14.632608  130521 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:48:14.632677  130521 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:48:14.669123  130521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 00:48:14.669195  130521 ssh_runner.go:195] Run: which lz4
	I1212 00:48:14.673520  130521 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 00:48:14.678290  130521 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 00:48:14.678327  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 00:48:16.431697  130521 crio.go:462] duration metric: took 1.758203576s to copy over tarball
	I1212 00:48:16.431789  130521 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 00:48:19.182978  130521 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.751151895s)
	I1212 00:48:19.183013  130521 crio.go:469] duration metric: took 2.751280353s to extract the tarball
	I1212 00:48:19.183023  130521 ssh_runner.go:146] rm: /preloaded.tar.lz4
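
With no /preloaded.tar.lz4 on the guest, the roughly 473 MB preloaded-images tarball is copied over, unpacked into /var with lz4, and then removed. As a rough illustration of how such a remote step can be driven from Go with golang.org/x/crypto/ssh (this is not minikube's ssh_runner; the key path, user and address are simply the ones shown in the sshutil lines above):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and user taken from the sshutil lines in the log above.
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.50.209:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        // Extraction command copied from the log: lz4-decompress into /var, keeping
        // security.capability xattrs so unpacked binaries retain their capabilities.
        out, err := sess.CombinedOutput("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
        if err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
    }
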
	I1212 00:48:19.228259  130521 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:48:19.282947  130521 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 00:48:19.282981  130521 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 00:48:19.283057  130521 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:48:19.283086  130521 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:48:19.283101  130521 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:48:19.283109  130521 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 00:48:19.283140  130521 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:48:19.283226  130521 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:48:19.283286  130521 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:48:19.283174  130521 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 00:48:19.284728  130521 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:48:19.284758  130521 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:48:19.284759  130521 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:48:19.284764  130521 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 00:48:19.284785  130521 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:48:19.284758  130521 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:48:19.284810  130521 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 00:48:19.284840  130521 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:48:19.459196  130521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:48:19.475326  130521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:48:19.481151  130521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 00:48:19.484218  130521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:48:19.493911  130521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:48:19.526704  130521 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 00:48:19.526755  130521 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:48:19.526808  130521 ssh_runner.go:195] Run: which crictl
	I1212 00:48:19.539694  130521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 00:48:19.540789  130521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 00:48:19.623693  130521 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 00:48:19.623747  130521 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:48:19.623782  130521 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 00:48:19.623834  130521 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:48:19.623800  130521 ssh_runner.go:195] Run: which crictl
	I1212 00:48:19.623881  130521 ssh_runner.go:195] Run: which crictl
	I1212 00:48:19.628058  130521 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 00:48:19.628107  130521 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:48:19.628152  130521 ssh_runner.go:195] Run: which crictl
	I1212 00:48:19.658088  130521 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 00:48:19.658153  130521 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:48:19.658174  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:48:19.658197  130521 ssh_runner.go:195] Run: which crictl
	I1212 00:48:19.695964  130521 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 00:48:19.696041  130521 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 00:48:19.696062  130521 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 00:48:19.696100  130521 ssh_runner.go:195] Run: which crictl
	I1212 00:48:19.696107  130521 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 00:48:19.696128  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:48:19.696154  130521 ssh_runner.go:195] Run: which crictl
	I1212 00:48:19.696194  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:48:19.696232  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:48:19.740460  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:48:19.740649  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:48:19.813464  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:48:19.813485  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:48:19.813548  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:48:19.813559  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:48:19.813607  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:48:19.819529  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:48:19.866211  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:48:19.967219  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:48:19.987427  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:48:19.987471  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:48:19.987498  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:48:19.987532  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:48:19.992110  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:48:20.064373  130521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 00:48:20.064503  130521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 00:48:20.147073  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:48:20.147123  130521 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:48:20.151999  130521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 00:48:20.152098  130521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 00:48:20.152060  130521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 00:48:20.200342  130521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 00:48:20.200404  130521 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 00:48:21.561815  130521 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:48:21.707976  130521 cache_images.go:92] duration metric: took 2.424975038s to LoadCachedImages
	W1212 00:48:21.708076  130521 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1212 00:48:21.708097  130521 kubeadm.go:934] updating node { 192.168.50.209 8443 v1.20.0 crio true true} ...
	I1212 00:48:21.708221  130521 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-459384 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-459384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:48:21.708326  130521 ssh_runner.go:195] Run: crio config
	I1212 00:48:21.754614  130521 cni.go:84] Creating CNI manager for ""
	I1212 00:48:21.754641  130521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:48:21.754657  130521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:48:21.754683  130521 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.209 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-459384 NodeName:kubernetes-upgrade-459384 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 00:48:21.754838  130521 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-459384"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:48:21.754924  130521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 00:48:21.765964  130521 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:48:21.766045  130521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:48:21.775917  130521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1212 00:48:21.793408  130521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:48:21.811272  130521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1212 00:48:21.828214  130521 ssh_runner.go:195] Run: grep 192.168.50.209	control-plane.minikube.internal$ /etc/hosts
	I1212 00:48:21.832585  130521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:48:21.847181  130521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:48:21.981341  130521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:48:21.998749  130521 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384 for IP: 192.168.50.209
	I1212 00:48:21.998777  130521 certs.go:194] generating shared ca certs ...
	I1212 00:48:21.998799  130521 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:48:21.998999  130521 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:48:21.999054  130521 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:48:21.999069  130521 certs.go:256] generating profile certs ...
	I1212 00:48:21.999162  130521 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/client.key
	I1212 00:48:21.999184  130521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/client.crt with IP's: []
	I1212 00:48:22.131379  130521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/client.crt ...
	I1212 00:48:22.131408  130521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/client.crt: {Name:mkb476ceb78deea477ee640694a209653d3b10b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:48:22.131573  130521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/client.key ...
	I1212 00:48:22.131589  130521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/client.key: {Name:mk8d6ac24a8982d601d7f8d055c89ef8ebaf35be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:48:22.131703  130521 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.key.23d5d1c4
	I1212 00:48:22.131725  130521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.crt.23d5d1c4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.209]
	I1212 00:48:22.283021  130521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.crt.23d5d1c4 ...
	I1212 00:48:22.283053  130521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.crt.23d5d1c4: {Name:mk5cec25e283fbc68c7bad6d0cb3fdb08cf8b285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:48:22.283214  130521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.key.23d5d1c4 ...
	I1212 00:48:22.283228  130521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.key.23d5d1c4: {Name:mkf298df75f0852a9cbf012510e44704341cfb76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:48:22.283324  130521 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.crt.23d5d1c4 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.crt
	I1212 00:48:22.283442  130521 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.key.23d5d1c4 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.key
	I1212 00:48:22.283533  130521 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.key
	I1212 00:48:22.283560  130521 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.crt with IP's: []
	I1212 00:48:22.419416  130521 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.crt ...
	I1212 00:48:22.419448  130521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.crt: {Name:mk470fab6d07ea19f5ef7f96966781b90b67f735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:48:22.419637  130521 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.key ...
	I1212 00:48:22.419653  130521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.key: {Name:mk339e060b1f337824ba76ff5b5d0b2f02d3da63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
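
The apiserver profile certificate generated above is signed for the service VIP 10.96.0.1, the loopback addresses and the node IP 192.168.50.209. A quick way to confirm those SANs after the fact is to parse the certificate; the sketch below only reads the file written in the log and does not regenerate anything.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // Path taken from the log above; adjust for your own environment.
        pemBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs :", cert.IPAddresses) // expect 10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.209
    }
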
	I1212 00:48:22.419820  130521 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:48:22.419859  130521 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:48:22.419871  130521 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:48:22.419897  130521 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:48:22.419947  130521 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:48:22.419979  130521 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:48:22.420017  130521 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:48:22.420613  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:48:22.447808  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:48:22.473198  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:48:22.501912  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:48:22.529539  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 00:48:22.556848  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:48:22.584529  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:48:22.612814  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:48:22.640627  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:48:22.665294  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:48:22.691111  130521 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:48:22.724282  130521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:48:22.744449  130521 ssh_runner.go:195] Run: openssl version
	I1212 00:48:22.752006  130521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:48:22.766288  130521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:48:22.771011  130521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:48:22.771072  130521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:48:22.777057  130521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:48:22.787850  130521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:48:22.799126  130521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:48:22.803936  130521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:48:22.804015  130521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:48:22.810036  130521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:48:22.821341  130521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:48:22.837886  130521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:48:22.845469  130521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:48:22.845534  130521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:48:22.853514  130521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
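
The ls/openssl/ln sequence above installs each CA under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL's hashed-directory lookup locates trust anchors. Below is a small sketch of the same two commands driven from Go; linkByOpenSSLHash is a hypothetical helper, and the real flow issues the equivalent shell over SSH.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByOpenSSLHash mimics the log above: compute the subject hash of certPath
    // with `openssl x509 -hash -noout` and symlink it into /etc/ssl/certs as <hash>.0.
    func linkByOpenSSLHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace an existing link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByOpenSSLHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("linked")
    }
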
	I1212 00:48:22.870574  130521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:48:22.875422  130521 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:48:22.875485  130521 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-459384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-459384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:48:22.875586  130521 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:48:22.875660  130521 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:48:22.945302  130521 cri.go:89] found id: ""
	I1212 00:48:22.945397  130521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:48:22.958963  130521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:48:22.968594  130521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:48:22.978106  130521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:48:22.978126  130521 kubeadm.go:157] found existing configuration files:
	
	I1212 00:48:22.978177  130521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:48:22.987645  130521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:48:22.987712  130521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:48:22.997513  130521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:48:23.006650  130521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:48:23.006714  130521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:48:23.016151  130521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:48:23.025236  130521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:48:23.025279  130521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:48:23.035048  130521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:48:23.044763  130521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:48:23.044812  130521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:48:23.054692  130521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 00:48:23.183812  130521 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 00:48:23.184073  130521 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 00:48:23.343752  130521 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:48:23.343875  130521 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:48:23.343989  130521 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:48:23.559898  130521 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:48:23.562186  130521 out.go:235]   - Generating certificates and keys ...
	I1212 00:48:23.562313  130521 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 00:48:23.562419  130521 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 00:48:23.720476  130521 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:48:23.827139  130521 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:48:23.951433  130521 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:48:24.160659  130521 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1212 00:48:24.401124  130521 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1212 00:48:24.401339  130521 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-459384 localhost] and IPs [192.168.50.209 127.0.0.1 ::1]
	I1212 00:48:24.702198  130521 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1212 00:48:24.702458  130521 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-459384 localhost] and IPs [192.168.50.209 127.0.0.1 ::1]
	I1212 00:48:24.868676  130521 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:48:25.151028  130521 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:48:25.258897  130521 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1212 00:48:25.259117  130521 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:48:25.497903  130521 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:48:25.679267  130521 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:48:25.825439  130521 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:48:25.915734  130521 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:48:25.931668  130521 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:48:25.933036  130521 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:48:25.933112  130521 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 00:48:26.087840  130521 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:48:26.090311  130521 out.go:235]   - Booting up control plane ...
	I1212 00:48:26.090435  130521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:48:26.095333  130521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:48:26.101986  130521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:48:26.103128  130521 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:48:26.109047  130521 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:49:06.105086  130521 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 00:49:06.105967  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:49:06.106273  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:49:11.105898  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:49:11.106171  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:49:21.104998  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:49:21.105286  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:49:41.105216  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:49:41.105549  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:50:21.107403  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:50:21.107690  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:50:21.107710  130521 kubeadm.go:310] 
	I1212 00:50:21.107795  130521 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 00:50:21.107879  130521 kubeadm.go:310] 		timed out waiting for the condition
	I1212 00:50:21.107891  130521 kubeadm.go:310] 
	I1212 00:50:21.107936  130521 kubeadm.go:310] 	This error is likely caused by:
	I1212 00:50:21.107985  130521 kubeadm.go:310] 		- The kubelet is not running
	I1212 00:50:21.108156  130521 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:50:21.108180  130521 kubeadm.go:310] 
	I1212 00:50:21.108343  130521 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:50:21.108409  130521 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 00:50:21.108494  130521 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 00:50:21.108511  130521 kubeadm.go:310] 
	I1212 00:50:21.108698  130521 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 00:50:21.108849  130521 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 00:50:21.108868  130521 kubeadm.go:310] 
	I1212 00:50:21.109010  130521 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 00:50:21.109130  130521 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 00:50:21.109237  130521 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 00:50:21.109362  130521 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 00:50:21.109386  130521 kubeadm.go:310] 
	I1212 00:50:21.110317  130521 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:50:21.110454  130521 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 00:50:21.110566  130521 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:50:21.110743  130521 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-459384 localhost] and IPs [192.168.50.209 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-459384 localhost] and IPs [192.168.50.209 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-459384 localhost] and IPs [192.168.50.209 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-459384 localhost] and IPs [192.168.50.209 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 00:50:21.110801  130521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:50:21.900101  130521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:50:21.915154  130521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:50:21.925787  130521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:50:21.925810  130521 kubeadm.go:157] found existing configuration files:
	
	I1212 00:50:21.925873  130521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:50:21.935435  130521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:50:21.935505  130521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:50:21.945149  130521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:50:21.954340  130521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:50:21.954394  130521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:50:21.963780  130521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:50:21.973899  130521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:50:21.973952  130521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:50:21.983573  130521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:50:21.992680  130521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:50:21.992739  130521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:50:22.002136  130521 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 00:50:22.073338  130521 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 00:50:22.073486  130521 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 00:50:22.230967  130521 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:50:22.231107  130521 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:50:22.231247  130521 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:50:22.448643  130521 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:50:22.450392  130521 out.go:235]   - Generating certificates and keys ...
	I1212 00:50:22.450512  130521 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 00:50:22.450605  130521 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 00:50:22.450717  130521 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:50:22.450803  130521 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:50:22.450899  130521 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:50:22.450980  130521 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 00:50:22.451075  130521 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:50:22.451184  130521 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:50:22.451297  130521 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:50:22.451416  130521 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:50:22.451475  130521 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 00:50:22.451551  130521 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:50:22.636176  130521 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:50:22.973003  130521 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:50:23.284743  130521 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:50:23.372592  130521 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:50:23.392411  130521 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:50:23.392556  130521 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:50:23.392616  130521 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 00:50:23.530642  130521 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:50:23.532578  130521 out.go:235]   - Booting up control plane ...
	I1212 00:50:23.532708  130521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:50:23.535774  130521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:50:23.537155  130521 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:50:23.539514  130521 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:50:23.541116  130521 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:51:03.545121  130521 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 00:51:03.545222  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:51:03.545483  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:51:08.545592  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:51:08.545886  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:51:18.546113  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:51:18.546393  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:51:38.545279  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:51:38.545614  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:52:18.544639  130521 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:52:18.544917  130521 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:52:18.544945  130521 kubeadm.go:310] 
	I1212 00:52:18.544987  130521 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 00:52:18.545054  130521 kubeadm.go:310] 		timed out waiting for the condition
	I1212 00:52:18.545083  130521 kubeadm.go:310] 
	I1212 00:52:18.545138  130521 kubeadm.go:310] 	This error is likely caused by:
	I1212 00:52:18.545197  130521 kubeadm.go:310] 		- The kubelet is not running
	I1212 00:52:18.545414  130521 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:52:18.545434  130521 kubeadm.go:310] 
	I1212 00:52:18.545563  130521 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:52:18.545624  130521 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 00:52:18.545670  130521 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 00:52:18.545680  130521 kubeadm.go:310] 
	I1212 00:52:18.545829  130521 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 00:52:18.545955  130521 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 00:52:18.545967  130521 kubeadm.go:310] 
	I1212 00:52:18.546131  130521 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 00:52:18.546252  130521 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 00:52:18.546360  130521 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 00:52:18.546468  130521 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 00:52:18.546491  130521 kubeadm.go:310] 
	I1212 00:52:18.547795  130521 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:52:18.547903  130521 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 00:52:18.548000  130521 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:52:18.548126  130521 kubeadm.go:394] duration metric: took 3m55.672643197s to StartCluster
	I1212 00:52:18.548182  130521 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:52:18.548237  130521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:52:18.593736  130521 cri.go:89] found id: ""
	I1212 00:52:18.593772  130521 logs.go:282] 0 containers: []
	W1212 00:52:18.593784  130521 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:52:18.593792  130521 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:52:18.593855  130521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:52:18.628466  130521 cri.go:89] found id: ""
	I1212 00:52:18.628499  130521 logs.go:282] 0 containers: []
	W1212 00:52:18.628509  130521 logs.go:284] No container was found matching "etcd"
	I1212 00:52:18.628514  130521 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:52:18.628579  130521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:52:18.663527  130521 cri.go:89] found id: ""
	I1212 00:52:18.663562  130521 logs.go:282] 0 containers: []
	W1212 00:52:18.663574  130521 logs.go:284] No container was found matching "coredns"
	I1212 00:52:18.663581  130521 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:52:18.663670  130521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:52:18.702188  130521 cri.go:89] found id: ""
	I1212 00:52:18.702221  130521 logs.go:282] 0 containers: []
	W1212 00:52:18.702234  130521 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:52:18.702242  130521 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:52:18.702305  130521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:52:18.737634  130521 cri.go:89] found id: ""
	I1212 00:52:18.737668  130521 logs.go:282] 0 containers: []
	W1212 00:52:18.737680  130521 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:52:18.737688  130521 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:52:18.737755  130521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:52:18.772383  130521 cri.go:89] found id: ""
	I1212 00:52:18.772419  130521 logs.go:282] 0 containers: []
	W1212 00:52:18.772430  130521 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:52:18.772438  130521 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:52:18.772506  130521 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:52:18.805882  130521 cri.go:89] found id: ""
	I1212 00:52:18.805914  130521 logs.go:282] 0 containers: []
	W1212 00:52:18.805925  130521 logs.go:284] No container was found matching "kindnet"
	I1212 00:52:18.805939  130521 logs.go:123] Gathering logs for kubelet ...
	I1212 00:52:18.805955  130521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:52:18.854119  130521 logs.go:123] Gathering logs for dmesg ...
	I1212 00:52:18.854155  130521 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:52:18.867912  130521 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:52:18.867942  130521 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:52:18.991403  130521 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:52:18.991423  130521 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:52:18.991436  130521 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:52:19.097363  130521 logs.go:123] Gathering logs for container status ...
	I1212 00:52:19.097422  130521 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 00:52:19.142781  130521 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:52:19.142866  130521 out.go:270] * 
	* 
	W1212 00:52:19.142991  130521 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:52:19.143009  130521 out.go:270] * 
	* 
	W1212 00:52:19.143808  130521 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:52:19.146924  130521 out.go:201] 
	W1212 00:52:19.148212  130521 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:52:19.148253  130521 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:52:19.148271  130521 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:52:19.149691  130521 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
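A minimal troubleshooting sketch for the kubelet failure above (not part of the captured output), assuming the profile name from this run and using only the commands the kubeadm output and the minikube suggestion above point at; a normal install would use minikube in place of out/minikube-linux-amd64:

	# Inspect the kubelet unit and its recent logs inside the guest VM:
	out/minikube-linux-amd64 -p kubernetes-upgrade-459384 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p kubernetes-upgrade-459384 ssh "sudo journalctl -xeu kubelet"

	# List control-plane containers via CRI-O, as the kubeadm output recommends:
	out/minikube-linux-amd64 -p kubernetes-upgrade-459384 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# If the cgroup-driver mismatch named in the suggestion above is the cause, retry the failing start with:
	out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd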
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-459384
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-459384: (1.425640613s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-459384 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-459384 status --format={{.Host}}: exit status 7 (68.068956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
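A small sketch (assuming the same profile) of the host-state check performed above; a stopped host prints Stopped and exits non-zero (7 in this run), which the test treats as acceptable before restarting:

	out/minikube-linux-amd64 -p kubernetes-upgrade-459384 status --format={{.Host}}
	echo "status exit code: $?"   # non-zero here reflects the stopped state, not a failure of the command itself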
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1212 00:52:38.769391   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.033695927s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-459384 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (84.299991ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-459384] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-459384
	    minikube start -p kubernetes-upgrade-459384 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4593842 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-459384 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
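The downgrade is rejected by design (exit status 106, K8S_DOWNGRADE_UNSUPPORTED). A short sketch, using the profile name and versions from this run, of two of the recovery paths the message itself offers, followed by the version check the test uses:

	# Option A: keep the existing cluster at v1.31.2 (what the test does next)
	out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --kubernetes-version=v1.31.2 --driver=kvm2 --container-runtime=crio

	# Option B: recreate the profile at the older version instead
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-459384
	out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio

	# Confirm which server version is actually running
	kubectl --context kubernetes-upgrade-459384 version --output=json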
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-459384 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.48241523s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-12 00:54:09.357420453 +0000 UTC m=+4847.449088059
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-459384 -n kubernetes-upgrade-459384
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-459384 logs -n 25
E1212 00:54:09.695781   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-459384 logs -n 25: (1.812285865s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cilium-018985                      | cilium-018985             | jenkins | v1.34.0 | 12 Dec 24 00:50 UTC | 12 Dec 24 00:50 UTC |
	| start   | -p stopped-upgrade-213355             | minikube                  | jenkins | v1.26.0 | 12 Dec 24 00:50 UTC | 12 Dec 24 00:51 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| pause   | -p pause-409734                       | pause-409734              | jenkins | v1.34.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:51 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-409734                       | pause-409734              | jenkins | v1.34.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:51 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-409734                       | pause-409734              | jenkins | v1.34.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:51 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-409734                       | pause-409734              | jenkins | v1.34.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:51 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-409734                       | pause-409734              | jenkins | v1.34.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:51 UTC |
	| start   | -p cert-expiration-112531             | cert-expiration-112531    | jenkins | v1.34.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:52 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-923531           | force-systemd-env-923531  | jenkins | v1.34.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:51 UTC |
	| start   | -p force-systemd-flag-641782          | force-systemd-flag-641782 | jenkins | v1.34.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:52 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-213355 stop           | minikube                  | jenkins | v1.26.0 | 12 Dec 24 00:51 UTC | 12 Dec 24 00:52 UTC |
	| start   | -p stopped-upgrade-213355             | stopped-upgrade-213355    | jenkins | v1.34.0 | 12 Dec 24 00:52 UTC | 12 Dec 24 00:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-459384          | kubernetes-upgrade-459384 | jenkins | v1.34.0 | 12 Dec 24 00:52 UTC | 12 Dec 24 00:52 UTC |
	| start   | -p kubernetes-upgrade-459384          | kubernetes-upgrade-459384 | jenkins | v1.34.0 | 12 Dec 24 00:52 UTC | 12 Dec 24 00:53 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-641782 ssh cat     | force-systemd-flag-641782 | jenkins | v1.34.0 | 12 Dec 24 00:52 UTC | 12 Dec 24 00:52 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-641782          | force-systemd-flag-641782 | jenkins | v1.34.0 | 12 Dec 24 00:52 UTC | 12 Dec 24 00:52 UTC |
	| start   | -p cert-options-000053                | cert-options-000053       | jenkins | v1.34.0 | 12 Dec 24 00:52 UTC | 12 Dec 24 00:53 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-213355             | stopped-upgrade-213355    | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	| start   | -p old-k8s-version-738445             | old-k8s-version-738445    | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-459384          | kubernetes-upgrade-459384 | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-459384          | kubernetes-upgrade-459384 | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:54 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-000053 ssh               | cert-options-000053       | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-000053 -- sudo        | cert-options-000053       | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-000053                | cert-options-000053       | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	| start   | -p no-preload-242725                  | no-preload-242725         | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:53:45
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:53:45.476423  138687 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:53:45.476521  138687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:53:45.476529  138687 out.go:358] Setting ErrFile to fd 2...
	I1212 00:53:45.476534  138687 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:53:45.476718  138687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:53:45.477285  138687 out.go:352] Setting JSON to false
	I1212 00:53:45.479017  138687 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12967,"bootTime":1733951858,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:53:45.479112  138687 start.go:139] virtualization: kvm guest
	I1212 00:53:45.481420  138687 out.go:177] * [no-preload-242725] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:53:45.482843  138687 notify.go:220] Checking for updates...
	I1212 00:53:45.482859  138687 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:53:45.484264  138687 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:53:45.485686  138687 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:53:45.486845  138687 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:53:45.488011  138687 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:53:45.489200  138687 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:53:45.490856  138687 config.go:182] Loaded profile config "cert-expiration-112531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:53:45.490945  138687 config.go:182] Loaded profile config "kubernetes-upgrade-459384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:53:45.491037  138687 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:53:45.491130  138687 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:53:45.526469  138687 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 00:53:45.527923  138687 start.go:297] selected driver: kvm2
	I1212 00:53:45.527937  138687 start.go:901] validating driver "kvm2" against <nil>
	I1212 00:53:45.527949  138687 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:53:45.528619  138687 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.528700  138687 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:53:45.543494  138687 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:53:45.543567  138687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1212 00:53:45.543914  138687 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:53:45.543950  138687 cni.go:84] Creating CNI manager for ""
	I1212 00:53:45.543994  138687 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:53:45.544003  138687 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 00:53:45.544053  138687 start.go:340] cluster config:
	{Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:53:45.544158  138687 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.545958  138687 out.go:177] * Starting "no-preload-242725" primary control-plane node in "no-preload-242725" cluster
	I1212 00:53:42.244847  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:42.245424  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:42.245450  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:42.245379  138286 retry.go:31] will retry after 4.523558734s: waiting for machine to come up
	I1212 00:53:46.770593  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.771214  138055 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 00:53:46.771239  138055 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 00:53:46.771253  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.771565  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445
	I1212 00:53:46.849212  138055 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
	I1212 00:53:46.849244  138055 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 00:53:46.849254  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 00:53:46.852948  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.853378  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:46.853410  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.853557  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 00:53:46.853580  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 00:53:46.853615  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:53:46.853722  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 00:53:46.853738  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 00:53:48.428624  138223 start.go:364] duration metric: took 25.405628286s to acquireMachinesLock for "kubernetes-upgrade-459384"
	I1212 00:53:48.428684  138223 start.go:96] Skipping create...Using existing machine configuration
	I1212 00:53:48.428696  138223 fix.go:54] fixHost starting: 
	I1212 00:53:48.429253  138223 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:53:48.429313  138223 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:53:48.446552  138223 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33765
	I1212 00:53:48.446985  138223 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:53:48.447719  138223 main.go:141] libmachine: Using API Version  1
	I1212 00:53:48.447749  138223 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:53:48.448173  138223 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:53:48.448410  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:53:48.448587  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetState
	I1212 00:53:48.450335  138223 fix.go:112] recreateIfNeeded on kubernetes-upgrade-459384: state=Running err=<nil>
	W1212 00:53:48.450360  138223 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 00:53:48.452591  138223 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-459384" VM ...
	I1212 00:53:46.984036  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
	I1212 00:53:46.984310  138055 main.go:141] libmachine: (old-k8s-version-738445) KVM machine creation complete!
	I1212 00:53:46.984647  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 00:53:46.985225  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:46.985426  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:46.985601  138055 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:53:46.985617  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 00:53:46.986967  138055 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:53:46.986983  138055 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:53:46.986991  138055 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:53:46.987000  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:46.989203  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.989575  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:46.989602  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.989763  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:46.989932  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:46.990086  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:46.990282  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:46.990467  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:46.990709  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:46.990723  138055 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:53:47.107578  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:53:47.107628  138055 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:53:47.107641  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.110797  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.111227  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.111260  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.111391  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.111546  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.111687  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.111818  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.112005  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:47.112216  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:47.112230  138055 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:53:47.233295  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:53:47.233426  138055 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:53:47.233447  138055 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:53:47.233460  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 00:53:47.233695  138055 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 00:53:47.233721  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 00:53:47.233873  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.238045  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.238507  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.238530  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.238718  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.239010  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.239194  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.239482  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.241668  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:47.242037  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:47.242058  138055 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 00:53:47.381979  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 00:53:47.382017  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.386217  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.386670  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.386699  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.386897  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.387140  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.387382  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.387561  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.387780  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:47.388010  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:47.388037  138055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:53:47.526170  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:53:47.526202  138055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:53:47.526247  138055 buildroot.go:174] setting up certificates
	I1212 00:53:47.526263  138055 provision.go:84] configureAuth start
	I1212 00:53:47.526282  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 00:53:47.526585  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 00:53:47.530840  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.531273  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.531299  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.531509  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.534355  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.534932  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.534999  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.535064  138055 provision.go:143] copyHostCerts
	I1212 00:53:47.535133  138055 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:53:47.535156  138055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:53:47.535217  138055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:53:47.535313  138055 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:53:47.535325  138055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:53:47.535352  138055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:53:47.535402  138055 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:53:47.535409  138055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:53:47.535426  138055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:53:47.535471  138055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
	I1212 00:53:47.766781  138055 provision.go:177] copyRemoteCerts
	I1212 00:53:47.766863  138055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:53:47.766903  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.769671  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.770060  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.770092  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.770232  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.770454  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.770634  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.770754  138055 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 00:53:47.858160  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:53:47.883236  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 00:53:47.907976  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:53:47.936945  138055 provision.go:87] duration metric: took 410.660837ms to configureAuth
	I1212 00:53:47.936976  138055 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:53:47.937236  138055 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:53:47.937333  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.940083  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.940407  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.940434  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.940603  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.940804  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.940956  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.941081  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.941222  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:47.941427  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:47.941444  138055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:53:48.173751  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:53:48.173790  138055 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:53:48.173803  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetURL
	I1212 00:53:48.175045  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using libvirt version 6000000
	I1212 00:53:48.177030  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.177455  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.177492  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.177655  138055 main.go:141] libmachine: Docker is up and running!
	I1212 00:53:48.177672  138055 main.go:141] libmachine: Reticulating splines...
	I1212 00:53:48.177681  138055 client.go:171] duration metric: took 22.863556961s to LocalClient.Create
	I1212 00:53:48.177708  138055 start.go:167] duration metric: took 22.863646729s to libmachine.API.Create "old-k8s-version-738445"
	I1212 00:53:48.177718  138055 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 00:53:48.177728  138055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:53:48.177746  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.177982  138055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:53:48.178006  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:48.180328  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.180548  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.180572  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.180700  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:48.180883  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.181058  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:48.181225  138055 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 00:53:48.266828  138055 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:53:48.271726  138055 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:53:48.271766  138055 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:53:48.271883  138055 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:53:48.272001  138055 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:53:48.272148  138055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:53:48.282334  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:53:48.307128  138055 start.go:296] duration metric: took 129.393853ms for postStartSetup
	I1212 00:53:48.307191  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 00:53:48.307820  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 00:53:48.310377  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.310751  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.310783  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.311055  138055 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:53:48.311308  138055 start.go:128] duration metric: took 23.018665207s to createHost
	I1212 00:53:48.311355  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:48.313787  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.314135  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.314175  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.314281  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:48.314461  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.314638  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.314816  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:48.314979  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:48.315139  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:48.315159  138055 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:53:48.428432  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733964828.396789907
	
	I1212 00:53:48.428467  138055 fix.go:216] guest clock: 1733964828.396789907
	I1212 00:53:48.428477  138055 fix.go:229] Guest: 2024-12-12 00:53:48.396789907 +0000 UTC Remote: 2024-12-12 00:53:48.311327143 +0000 UTC m=+41.451880126 (delta=85.462764ms)
	I1212 00:53:48.428506  138055 fix.go:200] guest clock delta is within tolerance: 85.462764ms
	I1212 00:53:48.428513  138055 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 23.136065357s
	I1212 00:53:48.428543  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.428856  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 00:53:48.431413  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.431747  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.431776  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.431988  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.432521  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.432712  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.432830  138055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:53:48.432877  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:48.432987  138055 ssh_runner.go:195] Run: cat /version.json
	I1212 00:53:48.433045  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:48.435852  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.436042  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.436182  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.436224  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.436332  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.436362  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:48.436376  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.436571  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:48.436583  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.436749  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.436756  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:48.436894  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:48.436951  138055 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 00:53:48.436999  138055 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 00:53:48.544837  138055 ssh_runner.go:195] Run: systemctl --version
	I1212 00:53:48.550950  138055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:53:48.713757  138055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:53:48.720651  138055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:53:48.720729  138055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:53:48.739373  138055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:53:48.739399  138055 start.go:495] detecting cgroup driver to use...
	I1212 00:53:48.739472  138055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:53:48.756005  138055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:53:48.771172  138055 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:53:48.771245  138055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:53:48.785436  138055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:53:48.799035  138055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:53:48.918566  138055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:53:49.087209  138055 docker.go:233] disabling docker service ...
	I1212 00:53:49.087299  138055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:53:49.104032  138055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:53:49.117720  138055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:53:49.250570  138055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:53:49.377245  138055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:53:49.394321  138055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:53:49.414554  138055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 00:53:49.414627  138055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:49.426149  138055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:53:49.426218  138055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:49.437592  138055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:49.450689  138055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:49.463493  138055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:53:49.476461  138055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:53:49.487925  138055 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:53:49.487976  138055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:53:49.504080  138055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:53:49.514170  138055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:53:49.629214  138055 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:53:49.724919  138055 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:53:49.724994  138055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:53:49.729883  138055 start.go:563] Will wait 60s for crictl version
	I1212 00:53:49.729942  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:49.733837  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:53:49.773542  138055 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:53:49.773634  138055 ssh_runner.go:195] Run: crio --version
	I1212 00:53:49.802300  138055 ssh_runner.go:195] Run: crio --version
	I1212 00:53:49.833253  138055 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
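For readers tracing the runtime setup above: the CRI-O configuration that ssh_runner just applied to old-k8s-version-738445 condenses to the shell sequence below. This is a sketch assembled from the exact commands in the log (the crictl.yaml write, the pause_image/cgroup_manager edits to /etc/crio/crio.conf.d/02-crio.conf, the netfilter prerequisites, and the service restart); it assumes a minikube Buildroot guest and root access, and is not something the test itself asks you to run.

  # Point crictl at the CRI-O socket.
  printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml

  # Pin the pause image and cgroup driver the log configures for Kubernetes v1.20.0.
  sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
  sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
  sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

  # Kernel prerequisites the log checks, then pick up the new config.
  modprobe br_netfilter
  echo 1 > /proc/sys/net/ipv4/ip_forward
  systemctl daemon-reload
  systemctl restart crio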
	I1212 00:53:45.547178  138687 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:53:45.547290  138687 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json ...
	I1212 00:53:45.547313  138687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json: {Name:mk0ae8050179f952c6ce4af2c5b5e70b695952c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:45.547444  138687 cache.go:107] acquiring lock: {Name:mkfc03a5cac3276ac6044835a94d8a7c632c885c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.547463  138687 cache.go:107] acquiring lock: {Name:mk21bf789c53a188fc4320b14eecafc4f360ba0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.547478  138687 cache.go:107] acquiring lock: {Name:mka8c652a0e4915a11d6ffddbd43e4ff00e8c226 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.547514  138687 cache.go:107] acquiring lock: {Name:mk1f106761be38bf48c8c83be9bc727b22b32c48 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.547471  138687 start.go:360] acquireMachinesLock for no-preload-242725: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:53:45.547490  138687 cache.go:107] acquiring lock: {Name:mkfd3d11b8923dfcbfeabe9da00815e4518691ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.547621  138687 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 00:53:45.547660  138687 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 00:53:45.547627  138687 cache.go:107] acquiring lock: {Name:mk9d6b57ab6a0fa6a20366705cff5457b37ca436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.547688  138687 cache.go:115] /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1212 00:53:45.547726  138687 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 272.584µs
	I1212 00:53:45.547745  138687 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1212 00:53:45.547627  138687 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1212 00:53:45.547829  138687 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 00:53:45.547644  138687 cache.go:107] acquiring lock: {Name:mk4bbd80f8ef04cc86b570f91b832ad87e14c932 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.547695  138687 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1212 00:53:45.547972  138687 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 00:53:45.547649  138687 cache.go:107] acquiring lock: {Name:mk6309693c14b4a7dbdafae561c3e6b6c315c5f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:45.548078  138687 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 00:53:45.548896  138687 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 00:53:45.548948  138687 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 00:53:45.548973  138687 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1212 00:53:45.548972  138687 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 00:53:45.549030  138687 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 00:53:45.549050  138687 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1212 00:53:45.549056  138687 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 00:53:45.761562  138687 cache.go:162] opening:  /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I1212 00:53:45.807887  138687 cache.go:162] opening:  /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1212 00:53:45.849906  138687 cache.go:157] /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I1212 00:53:45.849936  138687 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 302.471115ms
	I1212 00:53:45.849956  138687 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I1212 00:53:45.870218  138687 cache.go:162] opening:  /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1212 00:53:45.898257  138687 cache.go:162] opening:  /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1212 00:53:45.901320  138687 cache.go:162] opening:  /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1212 00:53:45.902060  138687 cache.go:162] opening:  /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1212 00:53:45.978547  138687 cache.go:162] opening:  /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1212 00:53:46.165813  138687 cache.go:157] /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1212 00:53:46.165839  138687 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2" took 618.254255ms
	I1212 00:53:46.165851  138687 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1212 00:53:47.424695  138687 cache.go:157] /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1212 00:53:47.424732  138687 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2" took 1.877169918s
	I1212 00:53:47.424750  138687 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1212 00:53:47.591123  138687 cache.go:157] /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1212 00:53:47.591156  138687 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.043561569s
	I1212 00:53:47.591172  138687 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1212 00:53:47.598726  138687 cache.go:157] /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1212 00:53:47.598751  138687 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2" took 2.051326683s
	I1212 00:53:47.598761  138687 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1212 00:53:47.608824  138687 cache.go:157] /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1212 00:53:47.608850  138687 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2" took 2.061378085s
	I1212 00:53:47.608861  138687 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1212 00:53:47.771776  138687 cache.go:157] /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I1212 00:53:47.771799  138687 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 2.224287174s
	I1212 00:53:47.771810  138687 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1212 00:53:47.771825  138687 cache.go:87] Successfully saved all images to host disk.
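Once the no-preload run reports that every image has been saved to the host cache, the result can be sanity-checked from the Jenkins workspace. A minimal sketch, reusing the cache paths printed by cache.go above (adjust MINIKUBE_HOME when reproducing elsewhere):

  CACHE=/home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64
  ls -lh "$CACHE/registry.k8s.io"           # kube-apiserver_v1.31.2, kube-proxy_v1.31.2, etcd_3.5.15-0, ...
  ls -lh "$CACHE/registry.k8s.io/coredns"   # coredns_v1.11.3
  ls -lh "$CACHE/gcr.io/k8s-minikube"       # storage-provisioner_v5
  du -sh "$CACHE"                           # total size of the image cache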
	I1212 00:53:49.834544  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 00:53:49.837424  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:49.837750  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:49.837779  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:49.837961  138055 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 00:53:49.842269  138055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:53:49.855177  138055 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:53:49.855291  138055 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:53:49.855358  138055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:53:49.888520  138055 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 00:53:49.888582  138055 ssh_runner.go:195] Run: which lz4
	I1212 00:53:49.892632  138055 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 00:53:49.896990  138055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 00:53:49.897022  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 00:53:51.546417  138055 crio.go:462] duration metric: took 1.653819988s to copy over tarball
	I1212 00:53:51.546504  138055 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
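The preload path taken here (no /preloaded.tar.lz4 on the guest, so the tarball is copied in and unpacked into /var) can be reproduced by hand when a run needs debugging. This is a sketch built from the commands in the log; minikube performs the copy over its own SSH client, so plain scp with the same key, address and target path is only a close stand-in:

  # On the Jenkins host: push the preloaded image tarball to the guest
  # (staged in /tmp because the docker user cannot write to / directly).
  scp -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa \
      /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
      docker@192.168.72.25:/tmp/preloaded.tar.lz4

  # On the guest: move it to the path the log uses, unpack into /var so CRI-O's
  # image store is prepopulated, then remove it (the log later runs the same rm).
  sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm /preloaded.tar.lz4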
	I1212 00:53:48.453852  138223 machine.go:93] provisionDockerMachine start ...
	I1212 00:53:48.453881  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:53:48.454099  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:48.457074  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.457545  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:48.457592  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.457720  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:48.457912  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:48.458073  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:48.458217  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:48.458357  138223 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:48.458598  138223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:53:48.458611  138223 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 00:53:48.572589  138223 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-459384
	
	I1212 00:53:48.572627  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetMachineName
	I1212 00:53:48.572909  138223 buildroot.go:166] provisioning hostname "kubernetes-upgrade-459384"
	I1212 00:53:48.572945  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetMachineName
	I1212 00:53:48.573152  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:48.576300  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.576652  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:48.576697  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.576826  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:48.577032  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:48.577221  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:48.577393  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:48.577590  138223 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:48.577797  138223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:53:48.577814  138223 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-459384 && echo "kubernetes-upgrade-459384" | sudo tee /etc/hostname
	I1212 00:53:48.709927  138223 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-459384
	
	I1212 00:53:48.709966  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:48.712668  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.712956  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:48.712989  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.713134  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:48.713349  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:48.713533  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:48.713727  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:48.713922  138223 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:48.714154  138223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:53:48.714181  138223 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-459384' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-459384/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-459384' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:53:48.828590  138223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:53:48.828625  138223 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:53:48.828681  138223 buildroot.go:174] setting up certificates
	I1212 00:53:48.828695  138223 provision.go:84] configureAuth start
	I1212 00:53:48.828714  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetMachineName
	I1212 00:53:48.828984  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetIP
	I1212 00:53:48.832137  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.832580  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:48.832610  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.832884  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:48.835536  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.835952  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:48.835985  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:48.836147  138223 provision.go:143] copyHostCerts
	I1212 00:53:48.836213  138223 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:53:48.836238  138223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:53:48.836317  138223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:53:48.836443  138223 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:53:48.836454  138223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:53:48.836487  138223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:53:48.836576  138223 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:53:48.836586  138223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:53:48.836615  138223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:53:48.836698  138223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-459384 san=[127.0.0.1 192.168.50.209 kubernetes-upgrade-459384 localhost minikube]
	I1212 00:53:49.038230  138223 provision.go:177] copyRemoteCerts
	I1212 00:53:49.038320  138223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:53:49.038357  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:49.041305  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:49.041690  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:49.041719  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:49.041916  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:49.042114  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:49.042242  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:49.042377  138223 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa Username:docker}
	I1212 00:53:49.126541  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 00:53:49.154915  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:53:49.187658  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:53:49.215959  138223 provision.go:87] duration metric: took 387.243491ms to configureAuth
	I1212 00:53:49.215999  138223 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:53:49.216216  138223 config.go:182] Loaded profile config "kubernetes-upgrade-459384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:53:49.216294  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:49.219358  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:49.219779  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:49.219810  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:49.220030  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:49.220240  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:49.220418  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:49.220563  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:49.220718  138223 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:49.220888  138223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:53:49.220902  138223 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:53:55.605109  138687 start.go:364] duration metric: took 10.057502076s to acquireMachinesLock for "no-preload-242725"
	I1212 00:53:55.605193  138687 start.go:93] Provisioning new machine with config: &{Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:53:55.605324  138687 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 00:53:54.097580  138055 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551044087s)
	I1212 00:53:54.097608  138055 crio.go:469] duration metric: took 2.551162212s to extract the tarball
	I1212 00:53:54.097616  138055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 00:53:54.139503  138055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:53:54.186743  138055 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 00:53:54.186772  138055 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 00:53:54.186859  138055 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.186916  138055 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.186929  138055 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.186859  138055 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:53:54.186877  138055 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.186869  138055 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 00:53:54.186862  138055 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.186949  138055 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.188362  138055 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.188397  138055 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.188359  138055 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.188512  138055 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:53:54.188588  138055 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 00:53:54.188661  138055 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.188727  138055 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.188817  138055 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.350701  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.350835  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.352355  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.366836  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.366880  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.391857  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.417014  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 00:53:54.484105  138055 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 00:53:54.484151  138055 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.484158  138055 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 00:53:54.484192  138055 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.484203  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.484236  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.513386  138055 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 00:53:54.513439  138055 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.513492  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.556947  138055 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 00:53:54.557037  138055 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.556956  138055 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 00:53:54.557097  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.557146  138055 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.557216  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.573431  138055 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 00:53:54.573475  138055 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 00:53:54.573487  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.573503  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.573545  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.573568  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.573548  138055 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 00:53:54.573617  138055 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.573643  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.573689  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.573707  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.713461  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.713542  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:53:54.713582  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.713605  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.713658  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.713661  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.713720  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.882997  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.900765  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.900819  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:53:54.900851  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.900909  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.900937  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.901031  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:55.054624  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 00:53:55.083875  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 00:53:55.083977  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 00:53:55.084032  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 00:53:55.084056  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 00:53:55.084131  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:55.084205  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:53:55.138532  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 00:53:55.138561  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 00:53:56.483700  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:53:56.626509  138055 cache_images.go:92] duration metric: took 2.439719978s to LoadCachedImages
	W1212 00:53:56.626604  138055 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1212 00:53:56.626622  138055 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 00:53:56.626754  138055 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:53:56.626849  138055 ssh_runner.go:195] Run: crio config
	I1212 00:53:56.682634  138055 cni.go:84] Creating CNI manager for ""
	I1212 00:53:56.682657  138055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:53:56.682666  138055 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:53:56.682685  138055 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 00:53:56.682819  138055 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:53:56.682874  138055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 00:53:56.693189  138055 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:53:56.693256  138055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:53:56.703035  138055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 00:53:56.721213  138055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:53:56.740475  138055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
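Note: the kubeadm.yaml rendered above is staged on the VM as kubeadm.yaml.new by the line above, then promoted and consumed further down this log. A condensed sketch of that flow, taken from the later log lines (the test drives these through ssh_runner rather than an interactive shell):

    # Staged by the line above:  /var/tmp/minikube/kubeadm.yaml.new
    # Promoted later in the log:
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    # Consumed later in the log with the pinned v1.20.0 binaries (the real
    # invocation also passes a long --ignore-preflight-errors list; see the
    # full kubeadm init line further down):
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml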
	I1212 00:53:56.761205  138055 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 00:53:56.765570  138055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:53:56.778588  138055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:53:56.892049  138055 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:53:56.910148  138055 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 00:53:56.910182  138055 certs.go:194] generating shared ca certs ...
	I1212 00:53:56.910205  138055 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:56.910411  138055 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:53:56.910474  138055 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:53:56.910488  138055 certs.go:256] generating profile certs ...
	I1212 00:53:56.910575  138055 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 00:53:56.910595  138055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt with IP's: []
	I1212 00:53:55.365091  138223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:53:55.365120  138223 machine.go:96] duration metric: took 6.911248972s to provisionDockerMachine
	I1212 00:53:55.365134  138223 start.go:293] postStartSetup for "kubernetes-upgrade-459384" (driver="kvm2")
	I1212 00:53:55.365146  138223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:53:55.365167  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:53:55.365525  138223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:53:55.365556  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:55.368562  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.368926  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:55.368957  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.369107  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:55.369297  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:55.369479  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:55.369663  138223 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa Username:docker}
	I1212 00:53:55.454205  138223 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:53:55.458419  138223 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:53:55.458444  138223 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:53:55.458499  138223 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:53:55.458567  138223 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:53:55.458664  138223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:53:55.468054  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:53:55.492018  138223 start.go:296] duration metric: took 126.853837ms for postStartSetup
	I1212 00:53:55.492061  138223 fix.go:56] duration metric: took 7.063365895s for fixHost
	I1212 00:53:55.492085  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:55.494761  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.495096  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:55.495129  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.495297  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:55.495488  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:55.495665  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:55.495791  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:55.495918  138223 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:55.496112  138223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.209 22 <nil> <nil>}
	I1212 00:53:55.496125  138223 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:53:55.604924  138223 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733964835.601245219
	
	I1212 00:53:55.604950  138223 fix.go:216] guest clock: 1733964835.601245219
	I1212 00:53:55.604956  138223 fix.go:229] Guest: 2024-12-12 00:53:55.601245219 +0000 UTC Remote: 2024-12-12 00:53:55.492064733 +0000 UTC m=+32.613165489 (delta=109.180486ms)
	I1212 00:53:55.605013  138223 fix.go:200] guest clock delta is within tolerance: 109.180486ms
	I1212 00:53:55.605019  138223 start.go:83] releasing machines lock for "kubernetes-upgrade-459384", held for 7.176363598s
	I1212 00:53:55.605044  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:53:55.605337  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetIP
	I1212 00:53:55.608293  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.608707  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:55.608737  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.608917  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:53:55.609573  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:53:55.609743  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .DriverName
	I1212 00:53:55.609856  138223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:53:55.609901  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:55.609988  138223 ssh_runner.go:195] Run: cat /version.json
	I1212 00:53:55.610027  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHHostname
	I1212 00:53:55.612968  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.613162  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.613410  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:55.613440  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.613531  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:55.613627  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:55.613651  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:55.613703  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:55.613836  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHPort
	I1212 00:53:55.613911  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:55.613979  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHKeyPath
	I1212 00:53:55.614035  138223 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa Username:docker}
	I1212 00:53:55.614128  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetSSHUsername
	I1212 00:53:55.614267  138223 sshutil.go:53] new ssh client: &{IP:192.168.50.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kubernetes-upgrade-459384/id_rsa Username:docker}
	I1212 00:53:55.724536  138223 ssh_runner.go:195] Run: systemctl --version
	I1212 00:53:55.731583  138223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:53:55.898871  138223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:53:55.910361  138223 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:53:55.910420  138223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:53:55.922142  138223 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 00:53:55.922171  138223 start.go:495] detecting cgroup driver to use...
	I1212 00:53:55.922233  138223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:53:55.942741  138223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:53:55.958479  138223 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:53:55.958546  138223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:53:55.976254  138223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:53:55.991747  138223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:53:56.125909  138223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:53:56.260551  138223 docker.go:233] disabling docker service ...
	I1212 00:53:56.260648  138223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:53:56.278469  138223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:53:56.292480  138223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:53:56.430536  138223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:53:56.568038  138223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:53:56.582984  138223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:53:56.604425  138223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 00:53:56.604487  138223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:56.615254  138223 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:53:56.615317  138223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:56.627992  138223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:56.639037  138223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:56.649499  138223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:53:56.661083  138223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:56.671953  138223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:56.686456  138223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:56.697167  138223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:53:56.707079  138223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:53:56.717547  138223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:53:56.880557  138223 ssh_runner.go:195] Run: sudo systemctl restart crio
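The sed edits in the block above rewrite /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A quick spot-check of the result (illustrative only; the expected values are inferred from the sed expressions above, not from an additional step the test runs):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # Expected after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)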
	I1212 00:53:56.989886  138055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt ...
	I1212 00:53:56.989917  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: {Name:mk064b3664718e7760e88dca13a69b246be0893f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:56.990097  138055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key ...
	I1212 00:53:56.990111  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key: {Name:mkb62938ccd86d2225dee25ce299c41f9d999785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:56.990217  138055 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 00:53:56.990240  138055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt.2e4d2e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.25]
	I1212 00:53:57.071943  138055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt.2e4d2e55 ...
	I1212 00:53:57.071972  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt.2e4d2e55: {Name:mkba07fa4d8014c68b7a351d7951c9385e1e2619 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:57.115089  138055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55 ...
	I1212 00:53:57.115149  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55: {Name:mkfa1a5241546321a9fa119856def155ee10bc45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:57.115319  138055 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt.2e4d2e55 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt
	I1212 00:53:57.115426  138055 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key
	I1212 00:53:57.115496  138055 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 00:53:57.115517  138055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt with IP's: []
	I1212 00:53:57.382065  138055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt ...
	I1212 00:53:57.382097  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt: {Name:mkc721b7ebf5c96aa6a76ab3eee8bcbae84cf792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:57.382294  138055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key ...
	I1212 00:53:57.382312  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key: {Name:mk706ec48e68acb47ca47e2e900c95b77c78f8df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:57.382667  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:53:57.382717  138055 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:53:57.382724  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:53:57.382748  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:53:57.382771  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:53:57.382788  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:53:57.382826  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:53:57.383483  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:53:57.414973  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:53:57.443818  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:53:57.471344  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:53:57.498234  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 00:53:57.541688  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:53:57.573330  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:53:57.600018  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:53:57.626179  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:53:57.651312  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:53:57.675818  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:53:57.703345  138055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:53:57.728172  138055 ssh_runner.go:195] Run: openssl version
	I1212 00:53:57.740060  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:53:57.754964  138055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:57.761914  138055 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:57.761996  138055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:57.772481  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:53:57.787053  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:53:57.802538  138055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:53:57.808498  138055 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:53:57.808570  138055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:53:57.821217  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:53:57.836671  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:53:57.851342  138055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:53:57.856428  138055 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:53:57.856497  138055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:53:57.862582  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
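The three symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes printed by the preceding "openssl x509 -hash" calls. A minimal sketch of that convention, using the minikubeCA certificate from this run as the example:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 in this run
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"  # same link the test creates above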
	I1212 00:53:57.877439  138055 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:53:57.882106  138055 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:53:57.882166  138055 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:53:57.882250  138055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:53:57.882317  138055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:53:57.923013  138055 cri.go:89] found id: ""
	I1212 00:53:57.923105  138055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:53:57.933758  138055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:53:57.944564  138055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:53:57.955148  138055 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:53:57.955175  138055 kubeadm.go:157] found existing configuration files:
	
	I1212 00:53:57.955220  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:53:57.965723  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:53:57.965793  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:53:57.977012  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:53:57.988459  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:53:57.988530  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:53:58.000510  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:53:58.012000  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:53:58.012054  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:53:58.025206  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:53:58.037105  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:53:58.037166  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:53:58.052079  138055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 00:53:58.179400  138055 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 00:53:58.179494  138055 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 00:53:58.331809  138055 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:53:58.331978  138055 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:53:58.332140  138055 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:53:58.532156  138055 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:53:58.659021  138223 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.778424834s)
	I1212 00:53:58.659056  138223 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:53:58.659123  138223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:53:58.670354  138223 start.go:563] Will wait 60s for crictl version
	I1212 00:53:58.670418  138223 ssh_runner.go:195] Run: which crictl
	I1212 00:53:58.676163  138223 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:53:58.726813  138223 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:53:58.726907  138223 ssh_runner.go:195] Run: crio --version
	I1212 00:53:58.765002  138223 ssh_runner.go:195] Run: crio --version
	I1212 00:53:58.803916  138223 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 00:53:55.608129  138687 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 00:53:55.608308  138687 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:53:55.608343  138687 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:53:55.628289  138687 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46719
	I1212 00:53:55.628873  138687 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:53:55.629478  138687 main.go:141] libmachine: Using API Version  1
	I1212 00:53:55.629509  138687 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:53:55.629872  138687 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:53:55.630042  138687 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 00:53:55.630184  138687 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 00:53:55.630317  138687 start.go:159] libmachine.API.Create for "no-preload-242725" (driver="kvm2")
	I1212 00:53:55.630349  138687 client.go:168] LocalClient.Create starting
	I1212 00:53:55.630387  138687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 00:53:55.630427  138687 main.go:141] libmachine: Decoding PEM data...
	I1212 00:53:55.630446  138687 main.go:141] libmachine: Parsing certificate...
	I1212 00:53:55.630517  138687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 00:53:55.630553  138687 main.go:141] libmachine: Decoding PEM data...
	I1212 00:53:55.630570  138687 main.go:141] libmachine: Parsing certificate...
	I1212 00:53:55.630594  138687 main.go:141] libmachine: Running pre-create checks...
	I1212 00:53:55.630607  138687 main.go:141] libmachine: (no-preload-242725) Calling .PreCreateCheck
	I1212 00:53:55.630988  138687 main.go:141] libmachine: (no-preload-242725) Calling .GetConfigRaw
	I1212 00:53:55.631408  138687 main.go:141] libmachine: Creating machine...
	I1212 00:53:55.631421  138687 main.go:141] libmachine: (no-preload-242725) Calling .Create
	I1212 00:53:55.631579  138687 main.go:141] libmachine: (no-preload-242725) Creating KVM machine...
	I1212 00:53:55.632904  138687 main.go:141] libmachine: (no-preload-242725) DBG | found existing default KVM network
	I1212 00:53:55.634584  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:55.634388  138806 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:57:bc} reservation:<nil>}
	I1212 00:53:55.635641  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:55.635532  138806 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:08:2e:41} reservation:<nil>}
	I1212 00:53:55.636905  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:55.636815  138806 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000380760}
	I1212 00:53:55.636941  138687 main.go:141] libmachine: (no-preload-242725) DBG | created network xml: 
	I1212 00:53:55.636957  138687 main.go:141] libmachine: (no-preload-242725) DBG | <network>
	I1212 00:53:55.636969  138687 main.go:141] libmachine: (no-preload-242725) DBG |   <name>mk-no-preload-242725</name>
	I1212 00:53:55.636996  138687 main.go:141] libmachine: (no-preload-242725) DBG |   <dns enable='no'/>
	I1212 00:53:55.637017  138687 main.go:141] libmachine: (no-preload-242725) DBG |   
	I1212 00:53:55.637034  138687 main.go:141] libmachine: (no-preload-242725) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1212 00:53:55.637045  138687 main.go:141] libmachine: (no-preload-242725) DBG |     <dhcp>
	I1212 00:53:55.637056  138687 main.go:141] libmachine: (no-preload-242725) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1212 00:53:55.637070  138687 main.go:141] libmachine: (no-preload-242725) DBG |     </dhcp>
	I1212 00:53:55.637089  138687 main.go:141] libmachine: (no-preload-242725) DBG |   </ip>
	I1212 00:53:55.637106  138687 main.go:141] libmachine: (no-preload-242725) DBG |   
	I1212 00:53:55.637116  138687 main.go:141] libmachine: (no-preload-242725) DBG | </network>
	I1212 00:53:55.637128  138687 main.go:141] libmachine: (no-preload-242725) DBG | 
	I1212 00:53:55.642322  138687 main.go:141] libmachine: (no-preload-242725) DBG | trying to create private KVM network mk-no-preload-242725 192.168.61.0/24...
	I1212 00:53:55.716456  138687 main.go:141] libmachine: (no-preload-242725) DBG | private KVM network mk-no-preload-242725 192.168.61.0/24 created
	I1212 00:53:55.716492  138687 main.go:141] libmachine: (no-preload-242725) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725 ...
	I1212 00:53:55.716511  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:55.716443  138806 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:53:55.716530  138687 main.go:141] libmachine: (no-preload-242725) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 00:53:55.716664  138687 main.go:141] libmachine: (no-preload-242725) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 00:53:55.973063  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:55.972925  138806 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa...
	I1212 00:53:56.247346  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:56.247213  138806 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/no-preload-242725.rawdisk...
	I1212 00:53:56.247389  138687 main.go:141] libmachine: (no-preload-242725) DBG | Writing magic tar header
	I1212 00:53:56.247410  138687 main.go:141] libmachine: (no-preload-242725) DBG | Writing SSH key tar header
	I1212 00:53:56.247422  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:56.247363  138806 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725 ...
	I1212 00:53:56.247503  138687 main.go:141] libmachine: (no-preload-242725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725
	I1212 00:53:56.247523  138687 main.go:141] libmachine: (no-preload-242725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 00:53:56.247538  138687 main.go:141] libmachine: (no-preload-242725) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725 (perms=drwx------)
	I1212 00:53:56.247557  138687 main.go:141] libmachine: (no-preload-242725) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 00:53:56.247623  138687 main.go:141] libmachine: (no-preload-242725) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 00:53:56.247653  138687 main.go:141] libmachine: (no-preload-242725) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 00:53:56.247667  138687 main.go:141] libmachine: (no-preload-242725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:53:56.247686  138687 main.go:141] libmachine: (no-preload-242725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 00:53:56.247698  138687 main.go:141] libmachine: (no-preload-242725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 00:53:56.247727  138687 main.go:141] libmachine: (no-preload-242725) DBG | Checking permissions on dir: /home/jenkins
	I1212 00:53:56.247738  138687 main.go:141] libmachine: (no-preload-242725) DBG | Checking permissions on dir: /home
	I1212 00:53:56.247750  138687 main.go:141] libmachine: (no-preload-242725) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 00:53:56.247764  138687 main.go:141] libmachine: (no-preload-242725) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 00:53:56.247774  138687 main.go:141] libmachine: (no-preload-242725) Creating domain...
	I1212 00:53:56.247793  138687 main.go:141] libmachine: (no-preload-242725) DBG | Skipping /home - not owner
	I1212 00:53:56.248994  138687 main.go:141] libmachine: (no-preload-242725) define libvirt domain using xml: 
	I1212 00:53:56.249020  138687 main.go:141] libmachine: (no-preload-242725) <domain type='kvm'>
	I1212 00:53:56.249054  138687 main.go:141] libmachine: (no-preload-242725)   <name>no-preload-242725</name>
	I1212 00:53:56.249087  138687 main.go:141] libmachine: (no-preload-242725)   <memory unit='MiB'>2200</memory>
	I1212 00:53:56.249096  138687 main.go:141] libmachine: (no-preload-242725)   <vcpu>2</vcpu>
	I1212 00:53:56.249111  138687 main.go:141] libmachine: (no-preload-242725)   <features>
	I1212 00:53:56.249121  138687 main.go:141] libmachine: (no-preload-242725)     <acpi/>
	I1212 00:53:56.249128  138687 main.go:141] libmachine: (no-preload-242725)     <apic/>
	I1212 00:53:56.249136  138687 main.go:141] libmachine: (no-preload-242725)     <pae/>
	I1212 00:53:56.249145  138687 main.go:141] libmachine: (no-preload-242725)     
	I1212 00:53:56.249152  138687 main.go:141] libmachine: (no-preload-242725)   </features>
	I1212 00:53:56.249160  138687 main.go:141] libmachine: (no-preload-242725)   <cpu mode='host-passthrough'>
	I1212 00:53:56.249172  138687 main.go:141] libmachine: (no-preload-242725)   
	I1212 00:53:56.249178  138687 main.go:141] libmachine: (no-preload-242725)   </cpu>
	I1212 00:53:56.249190  138687 main.go:141] libmachine: (no-preload-242725)   <os>
	I1212 00:53:56.249200  138687 main.go:141] libmachine: (no-preload-242725)     <type>hvm</type>
	I1212 00:53:56.249209  138687 main.go:141] libmachine: (no-preload-242725)     <boot dev='cdrom'/>
	I1212 00:53:56.249218  138687 main.go:141] libmachine: (no-preload-242725)     <boot dev='hd'/>
	I1212 00:53:56.249226  138687 main.go:141] libmachine: (no-preload-242725)     <bootmenu enable='no'/>
	I1212 00:53:56.249235  138687 main.go:141] libmachine: (no-preload-242725)   </os>
	I1212 00:53:56.249243  138687 main.go:141] libmachine: (no-preload-242725)   <devices>
	I1212 00:53:56.249254  138687 main.go:141] libmachine: (no-preload-242725)     <disk type='file' device='cdrom'>
	I1212 00:53:56.249268  138687 main.go:141] libmachine: (no-preload-242725)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/boot2docker.iso'/>
	I1212 00:53:56.249279  138687 main.go:141] libmachine: (no-preload-242725)       <target dev='hdc' bus='scsi'/>
	I1212 00:53:56.249288  138687 main.go:141] libmachine: (no-preload-242725)       <readonly/>
	I1212 00:53:56.249296  138687 main.go:141] libmachine: (no-preload-242725)     </disk>
	I1212 00:53:56.249319  138687 main.go:141] libmachine: (no-preload-242725)     <disk type='file' device='disk'>
	I1212 00:53:56.249332  138687 main.go:141] libmachine: (no-preload-242725)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 00:53:56.249351  138687 main.go:141] libmachine: (no-preload-242725)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/no-preload-242725.rawdisk'/>
	I1212 00:53:56.249362  138687 main.go:141] libmachine: (no-preload-242725)       <target dev='hda' bus='virtio'/>
	I1212 00:53:56.249370  138687 main.go:141] libmachine: (no-preload-242725)     </disk>
	I1212 00:53:56.249381  138687 main.go:141] libmachine: (no-preload-242725)     <interface type='network'>
	I1212 00:53:56.249390  138687 main.go:141] libmachine: (no-preload-242725)       <source network='mk-no-preload-242725'/>
	I1212 00:53:56.249400  138687 main.go:141] libmachine: (no-preload-242725)       <model type='virtio'/>
	I1212 00:53:56.249409  138687 main.go:141] libmachine: (no-preload-242725)     </interface>
	I1212 00:53:56.249419  138687 main.go:141] libmachine: (no-preload-242725)     <interface type='network'>
	I1212 00:53:56.249427  138687 main.go:141] libmachine: (no-preload-242725)       <source network='default'/>
	I1212 00:53:56.249434  138687 main.go:141] libmachine: (no-preload-242725)       <model type='virtio'/>
	I1212 00:53:56.249441  138687 main.go:141] libmachine: (no-preload-242725)     </interface>
	I1212 00:53:56.249447  138687 main.go:141] libmachine: (no-preload-242725)     <serial type='pty'>
	I1212 00:53:56.249456  138687 main.go:141] libmachine: (no-preload-242725)       <target port='0'/>
	I1212 00:53:56.249466  138687 main.go:141] libmachine: (no-preload-242725)     </serial>
	I1212 00:53:56.249473  138687 main.go:141] libmachine: (no-preload-242725)     <console type='pty'>
	I1212 00:53:56.249484  138687 main.go:141] libmachine: (no-preload-242725)       <target type='serial' port='0'/>
	I1212 00:53:56.249493  138687 main.go:141] libmachine: (no-preload-242725)     </console>
	I1212 00:53:56.249503  138687 main.go:141] libmachine: (no-preload-242725)     <rng model='virtio'>
	I1212 00:53:56.249514  138687 main.go:141] libmachine: (no-preload-242725)       <backend model='random'>/dev/random</backend>
	I1212 00:53:56.249524  138687 main.go:141] libmachine: (no-preload-242725)     </rng>
	I1212 00:53:56.249532  138687 main.go:141] libmachine: (no-preload-242725)     
	I1212 00:53:56.249541  138687 main.go:141] libmachine: (no-preload-242725)     
	I1212 00:53:56.249550  138687 main.go:141] libmachine: (no-preload-242725)   </devices>
	I1212 00:53:56.249559  138687 main.go:141] libmachine: (no-preload-242725) </domain>
	I1212 00:53:56.249569  138687 main.go:141] libmachine: (no-preload-242725) 
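The block above is the full libvirt domain XML the kvm2 driver defines for the no-preload-242725 VM. As a minimal sketch for reproducing this step by hand (assuming the XML is saved to a local file named no-preload-242725.xml, which is not a path minikube itself writes, and libvirt is reachable at qemu:///system):

  # define the domain from the generated XML, then boot it
  virsh --connect qemu:///system define no-preload-242725.xml
  virsh --connect qemu:///system start no-preload-242725

  # list the attached interfaces; the MACs should match the ones logged below
  virsh --connect qemu:///system domiflist no-preload-242725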
	I1212 00:53:56.336394  138687 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:79:59:de in network default
	I1212 00:53:56.337104  138687 main.go:141] libmachine: (no-preload-242725) Ensuring networks are active...
	I1212 00:53:56.337141  138687 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 00:53:56.337822  138687 main.go:141] libmachine: (no-preload-242725) Ensuring network default is active
	I1212 00:53:56.338253  138687 main.go:141] libmachine: (no-preload-242725) Ensuring network mk-no-preload-242725 is active
	I1212 00:53:56.338900  138687 main.go:141] libmachine: (no-preload-242725) Getting domain xml...
	I1212 00:53:56.339893  138687 main.go:141] libmachine: (no-preload-242725) Creating domain...
	I1212 00:53:58.408861  138687 main.go:141] libmachine: (no-preload-242725) Waiting to get IP...
	I1212 00:53:58.409558  138687 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 00:53:58.410113  138687 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 00:53:58.410152  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:58.410080  138806 retry.go:31] will retry after 246.413049ms: waiting for machine to come up
	I1212 00:53:58.658410  138687 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 00:53:58.659013  138687 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 00:53:58.659036  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:58.658977  138806 retry.go:31] will retry after 294.893085ms: waiting for machine to come up
	I1212 00:53:58.955688  138687 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 00:53:58.956301  138687 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 00:53:58.956340  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:58.956231  138806 retry.go:31] will retry after 337.757465ms: waiting for machine to come up
	I1212 00:53:59.295571  138687 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 00:53:59.296102  138687 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 00:53:59.296128  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:59.296049  138806 retry.go:31] will retry after 449.424691ms: waiting for machine to come up
	I1212 00:53:59.746749  138687 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 00:53:59.747319  138687 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 00:53:59.747350  138687 main.go:141] libmachine: (no-preload-242725) DBG | I1212 00:53:59.747270  138806 retry.go:31] will retry after 735.70066ms: waiting for machine to come up
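The "waiting for machine to come up" retries above poll libvirt until the newly booted VM obtains a DHCP lease on the mk-no-preload-242725 network. A minimal sketch of the same check done by hand, using the MAC address and network name from the log:

  # leases handed out on the minikube-created network; the VM is reachable
  # once a lease for MAC 52:54:00:ab:6f:4a shows up here
  virsh --connect qemu:///system net-dhcp-leases mk-no-preload-242725

  # or ask for the domain's addresses straight from the lease database
  virsh --connect qemu:///system domifaddr no-preload-242725 --source lease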
	I1212 00:53:58.533778  138055 out.go:235]   - Generating certificates and keys ...
	I1212 00:53:58.533892  138055 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 00:53:58.533984  138055 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 00:53:58.764206  138055 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:53:58.866607  138055 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:53:59.067523  138055 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:53:59.239198  138055 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1212 00:53:59.429160  138055 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1212 00:53:59.429480  138055 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-738445] and IPs [192.168.72.25 127.0.0.1 ::1]
	I1212 00:53:59.756852  138055 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1212 00:53:59.757250  138055 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-738445] and IPs [192.168.72.25 127.0.0.1 ::1]
	I1212 00:53:59.972095  138055 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:54:00.105563  138055 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:54:00.290134  138055 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1212 00:54:00.290446  138055 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:54:00.389600  138055 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:54:00.909404  138055 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:54:01.241855  138055 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:54:01.336918  138055 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:54:01.367914  138055 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:54:01.369603  138055 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:54:01.369722  138055 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 00:54:01.538222  138055 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:54:01.540331  138055 out.go:235]   - Booting up control plane ...
	I1212 00:54:01.540473  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:54:01.552275  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:54:01.553790  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:54:01.555130  138055 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:54:01.560532  138055 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:53:58.805449  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) Calling .GetIP
	I1212 00:53:58.809016  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:58.809432  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:4f:45", ip: ""} in network mk-kubernetes-upgrade-459384: {Iface:virbr2 ExpiryTime:2024-12-12 01:52:51 +0000 UTC Type:0 Mac:52:54:00:fb:4f:45 Iaid: IPaddr:192.168.50.209 Prefix:24 Hostname:kubernetes-upgrade-459384 Clientid:01:52:54:00:fb:4f:45}
	I1212 00:53:58.809464  138223 main.go:141] libmachine: (kubernetes-upgrade-459384) DBG | domain kubernetes-upgrade-459384 has defined IP address 192.168.50.209 and MAC address 52:54:00:fb:4f:45 in network mk-kubernetes-upgrade-459384
	I1212 00:53:58.809740  138223 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 00:53:58.814640  138223 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-459384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-459384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.209 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:53:58.814809  138223 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 00:53:58.814872  138223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:53:58.864881  138223 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:53:58.864907  138223 crio.go:433] Images already preloaded, skipping extraction
	I1212 00:53:58.864955  138223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:53:58.904639  138223 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 00:53:58.904671  138223 cache_images.go:84] Images are preloaded, skipping loading
	I1212 00:53:58.904680  138223 kubeadm.go:934] updating node { 192.168.50.209 8443 v1.31.2 crio true true} ...
	I1212 00:53:58.904805  138223 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-459384 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-459384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:53:58.904869  138223 ssh_runner.go:195] Run: crio config
	I1212 00:53:58.957208  138223 cni.go:84] Creating CNI manager for ""
	I1212 00:53:58.957231  138223 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:53:58.957244  138223 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:53:58.957274  138223 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.209 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-459384 NodeName:kubernetes-upgrade-459384 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 00:53:58.957440  138223 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-459384"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.209"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.209"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
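The kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A minimal sketch for sanity-checking a file like this before it is used, assuming a kubeadm v1.31.x binary is on the node's PATH:

  # validate the file against the kubeadm v1beta4 API types
  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

  # or walk through the init phases without touching the node
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run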
	
	I1212 00:53:58.957526  138223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 00:53:58.969173  138223 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:53:58.969259  138223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:53:58.980342  138223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1212 00:53:58.998780  138223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:53:59.018293  138223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1212 00:53:59.037065  138223 ssh_runner.go:195] Run: grep 192.168.50.209	control-plane.minikube.internal$ /etc/hosts
	I1212 00:53:59.041341  138223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:53:59.192227  138223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:53:59.208166  138223 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384 for IP: 192.168.50.209
	I1212 00:53:59.208192  138223 certs.go:194] generating shared ca certs ...
	I1212 00:53:59.208212  138223 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:59.208404  138223 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:53:59.208460  138223 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:53:59.208474  138223 certs.go:256] generating profile certs ...
	I1212 00:53:59.208572  138223 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/client.key
	I1212 00:53:59.208661  138223 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.key.23d5d1c4
	I1212 00:53:59.208718  138223 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.key
	I1212 00:53:59.208867  138223 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:53:59.208904  138223 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:53:59.208918  138223 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:53:59.208954  138223 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:53:59.208988  138223 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:53:59.209017  138223 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:53:59.209076  138223 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:53:59.209953  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:53:59.243493  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:53:59.277733  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:53:59.312158  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:53:59.344024  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 00:53:59.377105  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 00:53:59.409915  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:53:59.438552  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kubernetes-upgrade-459384/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:53:59.472394  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:53:59.502614  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:53:59.533838  138223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:53:59.564460  138223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:53:59.587124  138223 ssh_runner.go:195] Run: openssl version
	I1212 00:53:59.593904  138223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:53:59.606474  138223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:53:59.611546  138223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:53:59.611627  138223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:53:59.618395  138223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:53:59.632291  138223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:53:59.643964  138223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:53:59.658069  138223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:53:59.658149  138223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:53:59.666529  138223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:53:59.676707  138223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:53:59.689228  138223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:59.694364  138223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:59.694437  138223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:59.702075  138223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
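The test/ln/openssl sequences above install each CA under /etc/ssl/certs using OpenSSL's subject-hash link naming (b5213941.0 for minikubeCA.pem, and so on). A minimal sketch of that convention for a single certificate, using the minikubeCA.pem path from the log:

  # compute the subject hash OpenSSL uses to look CAs up ...
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  # ... and publish the CA at /etc/ssl/certs/<hash>.0, where TLS clients expect it
  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"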
	I1212 00:53:59.713238  138223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:53:59.719803  138223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 00:53:59.728031  138223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 00:53:59.734193  138223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 00:53:59.740293  138223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 00:53:59.746545  138223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 00:53:59.753418  138223 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
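The openssl x509 -checkend 86400 runs above confirm that none of the control-plane certificates expire within the next 24 hours (86,400 seconds). A minimal sketch that applies the same check to every certificate under /var/lib/minikube/certs; the loop and its message are illustrative, not something minikube runs:

  # -checkend exits non-zero if the certificate expires within 24h
  for crt in $(sudo find /var/lib/minikube/certs -name '*.crt'); do
    sudo openssl x509 -noout -in "$crt" -checkend 86400 \
      || echo "expiring soon: $crt"
  done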
	I1212 00:53:59.759924  138223 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-459384 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-459384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.209 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:53:59.760025  138223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:53:59.760084  138223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:53:59.803235  138223 cri.go:89] found id: "6fe9f7ea509b252dfbabb59c441e5b4ff121a73776145466c7c7e9c70c17b645"
	I1212 00:53:59.803262  138223 cri.go:89] found id: "4973e32994088ffda4aa2856c70054a1ef93b0fd64a01edfffbb12e2efad9ef4"
	I1212 00:53:59.803272  138223 cri.go:89] found id: "d04d2317b03f31a0786fbb45eae744f4fcab331fcd043b1531428d3f9b398773"
	I1212 00:53:59.803277  138223 cri.go:89] found id: "89fc3286b61fc334794c1221c3e30b8fbd7fa8554e2a1253372259daa12cd489"
	I1212 00:53:59.803282  138223 cri.go:89] found id: "bfa9d5f60b80b6f3a8f1a742a1b6011f131729e9b3ba0429e5cf0673ebdf9998"
	I1212 00:53:59.803287  138223 cri.go:89] found id: "e028a7cdd87bc33c5a74e4677fa13a300a69f9f2235641d90ebe49b21d884ccb"
	I1212 00:53:59.803291  138223 cri.go:89] found id: "ef672184db832b1cd77d70aa660e0601a8f7c64aa6218c33ecf9be8eb6f9f9e5"
	I1212 00:53:59.803295  138223 cri.go:89] found id: "44f92e1ad4fd7dcfef7171f506ee6f40dac94306d3799f064b7aeba9e1885cd8"
	I1212 00:53:59.803299  138223 cri.go:89] found id: ""
	I1212 00:53:59.803358  138223 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
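The bare container IDs at the end of the log above come from crictl filtered on the kube-system pod-namespace label. A minimal sketch for turning those IDs back into readable names on the node (same label filter as the test; the inspect call uses the first ID from the list):

  # same filter as the test, but with names, images and state instead of bare IDs
  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

  # drill into one of the listed containers
  sudo crictl inspect 6fe9f7ea509b252dfbabb59c441e5b4ff121a73776145466c7c7e9c70c17b645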
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-459384 -n kubernetes-upgrade-459384
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-459384 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-459384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-459384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-459384: (1.138499137s)
--- FAIL: TestKubernetesUpgrade (411.20s)
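The post-mortem above checks the API server status and lists non-Running pods before the profile is deleted. A minimal sketch of the same triage run by hand while a failed profile still exists; the minikube logs call is an extra, optional step not shown in the log:

  # API server state as minikube reports it
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p kubernetes-upgrade-459384

  # pods that are not Running, across all namespaces
  kubectl --context kubernetes-upgrade-459384 get po -A --field-selector=status.phase!=Running

  # full cluster logs for the profile
  out/minikube-linux-amd64 logs -p kubernetes-upgrade-459384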

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (288.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-738445 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-738445 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m48.587320526s)

                                                
                                                
-- stdout --
	* [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:53:06.910897  138055 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:53:06.911021  138055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:53:06.911032  138055 out.go:358] Setting ErrFile to fd 2...
	I1212 00:53:06.911040  138055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:53:06.911236  138055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:53:06.911923  138055 out.go:352] Setting JSON to false
	I1212 00:53:06.913049  138055 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12929,"bootTime":1733951858,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:53:06.913189  138055 start.go:139] virtualization: kvm guest
	I1212 00:53:07.002125  138055 out.go:177] * [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:53:07.228340  138055 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:53:07.228362  138055 notify.go:220] Checking for updates...
	I1212 00:53:07.595968  138055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:53:07.707714  138055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:53:07.838999  138055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:53:07.968640  138055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:53:08.114027  138055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:53:08.248443  138055 config.go:182] Loaded profile config "cert-expiration-112531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:53:08.248610  138055 config.go:182] Loaded profile config "cert-options-000053": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:53:08.248715  138055 config.go:182] Loaded profile config "kubernetes-upgrade-459384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:53:08.248855  138055 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:53:08.378394  138055 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 00:53:08.379899  138055 start.go:297] selected driver: kvm2
	I1212 00:53:08.379916  138055 start.go:901] validating driver "kvm2" against <nil>
	I1212 00:53:08.379952  138055 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:53:08.381014  138055 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:08.381134  138055 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:53:08.397649  138055 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:53:08.397703  138055 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1212 00:53:08.398019  138055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:53:08.398067  138055 cni.go:84] Creating CNI manager for ""
	I1212 00:53:08.398142  138055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:53:08.398155  138055 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 00:53:08.398242  138055 start.go:340] cluster config:
	{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:53:08.398403  138055 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:53:08.400669  138055 out.go:177] * Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	I1212 00:53:08.401807  138055 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:53:08.401863  138055 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:53:08.401878  138055 cache.go:56] Caching tarball of preloaded images
	I1212 00:53:08.401961  138055 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:53:08.401975  138055 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:53:08.402110  138055 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:53:08.402139  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json: {Name:mkffa057af971791a1f31295e73c0dc23469a3c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:08.402315  138055 start.go:360] acquireMachinesLock for old-k8s-version-738445: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:53:25.292404  138055 start.go:364] duration metric: took 16.890056768s to acquireMachinesLock for "old-k8s-version-738445"
	I1212 00:53:25.292491  138055 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 00:53:25.292626  138055 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 00:53:25.294475  138055 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 00:53:25.294695  138055 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:53:25.294750  138055 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:53:25.312063  138055 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37687
	I1212 00:53:25.312570  138055 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:53:25.313178  138055 main.go:141] libmachine: Using API Version  1
	I1212 00:53:25.313207  138055 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:53:25.313538  138055 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:53:25.313727  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 00:53:25.313886  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:25.314065  138055 start.go:159] libmachine.API.Create for "old-k8s-version-738445" (driver="kvm2")
	I1212 00:53:25.314112  138055 client.go:168] LocalClient.Create starting
	I1212 00:53:25.314151  138055 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 00:53:25.314198  138055 main.go:141] libmachine: Decoding PEM data...
	I1212 00:53:25.314219  138055 main.go:141] libmachine: Parsing certificate...
	I1212 00:53:25.314290  138055 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 00:53:25.314327  138055 main.go:141] libmachine: Decoding PEM data...
	I1212 00:53:25.314350  138055 main.go:141] libmachine: Parsing certificate...
	I1212 00:53:25.314382  138055 main.go:141] libmachine: Running pre-create checks...
	I1212 00:53:25.314396  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .PreCreateCheck
	I1212 00:53:25.314820  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 00:53:25.315284  138055 main.go:141] libmachine: Creating machine...
	I1212 00:53:25.315301  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .Create
	I1212 00:53:25.315451  138055 main.go:141] libmachine: (old-k8s-version-738445) Creating KVM machine...
	I1212 00:53:25.316670  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found existing default KVM network
	I1212 00:53:25.317777  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:25.317628  138286 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:57:bc} reservation:<nil>}
	I1212 00:53:25.318492  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:25.318432  138286 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:08:2e:41} reservation:<nil>}
	I1212 00:53:25.319410  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:25.319341  138286 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:40:e4:4f} reservation:<nil>}
	I1212 00:53:25.320811  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:25.320726  138286 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00039b0e0}
	I1212 00:53:25.320834  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | created network xml: 
	I1212 00:53:25.320853  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | <network>
	I1212 00:53:25.320865  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |   <name>mk-old-k8s-version-738445</name>
	I1212 00:53:25.320874  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |   <dns enable='no'/>
	I1212 00:53:25.320890  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |   
	I1212 00:53:25.320924  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1212 00:53:25.320949  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |     <dhcp>
	I1212 00:53:25.320962  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1212 00:53:25.320983  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |     </dhcp>
	I1212 00:53:25.320995  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |   </ip>
	I1212 00:53:25.321017  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG |   
	I1212 00:53:25.321031  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | </network>
	I1212 00:53:25.321041  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | 
	I1212 00:53:25.326259  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | trying to create private KVM network mk-old-k8s-version-738445 192.168.72.0/24...
	I1212 00:53:25.401548  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | private KVM network mk-old-k8s-version-738445 192.168.72.0/24 created
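Above, the driver skips the already-used 192.168.39/50/61 subnets, settles on 192.168.72.0/24 and creates the private mk-old-k8s-version-738445 network from the XML it printed. A minimal sketch for inspecting the result by hand, using the network name from the log:

  # the new network should be listed as active
  virsh --connect qemu:///system net-list --all

  # dump the XML libvirt actually stored for it
  virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-738445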
	I1212 00:53:25.401603  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:25.401501  138286 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:53:25.401620  138055 main.go:141] libmachine: (old-k8s-version-738445) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445 ...
	I1212 00:53:25.401645  138055 main.go:141] libmachine: (old-k8s-version-738445) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 00:53:25.401668  138055 main.go:141] libmachine: (old-k8s-version-738445) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 00:53:25.685426  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:25.685258  138286 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa...
	I1212 00:53:25.875162  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:25.875006  138286 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/old-k8s-version-738445.rawdisk...
	I1212 00:53:25.875204  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Writing magic tar header
	I1212 00:53:25.875227  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Writing SSH key tar header
	I1212 00:53:25.875238  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:25.875133  138286 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445 ...
	I1212 00:53:25.875255  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445
	I1212 00:53:25.875369  138055 main.go:141] libmachine: (old-k8s-version-738445) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445 (perms=drwx------)
	I1212 00:53:25.875408  138055 main.go:141] libmachine: (old-k8s-version-738445) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 00:53:25.875419  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 00:53:25.875440  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:53:25.875452  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 00:53:25.875470  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 00:53:25.875481  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Checking permissions on dir: /home/jenkins
	I1212 00:53:25.875490  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Checking permissions on dir: /home
	I1212 00:53:25.875501  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Skipping /home - not owner
	I1212 00:53:25.875511  138055 main.go:141] libmachine: (old-k8s-version-738445) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 00:53:25.875528  138055 main.go:141] libmachine: (old-k8s-version-738445) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 00:53:25.875545  138055 main.go:141] libmachine: (old-k8s-version-738445) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 00:53:25.875559  138055 main.go:141] libmachine: (old-k8s-version-738445) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 00:53:25.875567  138055 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 00:53:25.876726  138055 main.go:141] libmachine: (old-k8s-version-738445) define libvirt domain using xml: 
	I1212 00:53:25.876755  138055 main.go:141] libmachine: (old-k8s-version-738445) <domain type='kvm'>
	I1212 00:53:25.876766  138055 main.go:141] libmachine: (old-k8s-version-738445)   <name>old-k8s-version-738445</name>
	I1212 00:53:25.876778  138055 main.go:141] libmachine: (old-k8s-version-738445)   <memory unit='MiB'>2200</memory>
	I1212 00:53:25.876792  138055 main.go:141] libmachine: (old-k8s-version-738445)   <vcpu>2</vcpu>
	I1212 00:53:25.876799  138055 main.go:141] libmachine: (old-k8s-version-738445)   <features>
	I1212 00:53:25.876819  138055 main.go:141] libmachine: (old-k8s-version-738445)     <acpi/>
	I1212 00:53:25.876834  138055 main.go:141] libmachine: (old-k8s-version-738445)     <apic/>
	I1212 00:53:25.876841  138055 main.go:141] libmachine: (old-k8s-version-738445)     <pae/>
	I1212 00:53:25.876851  138055 main.go:141] libmachine: (old-k8s-version-738445)     
	I1212 00:53:25.876863  138055 main.go:141] libmachine: (old-k8s-version-738445)   </features>
	I1212 00:53:25.876873  138055 main.go:141] libmachine: (old-k8s-version-738445)   <cpu mode='host-passthrough'>
	I1212 00:53:25.876884  138055 main.go:141] libmachine: (old-k8s-version-738445)   
	I1212 00:53:25.876893  138055 main.go:141] libmachine: (old-k8s-version-738445)   </cpu>
	I1212 00:53:25.876904  138055 main.go:141] libmachine: (old-k8s-version-738445)   <os>
	I1212 00:53:25.876914  138055 main.go:141] libmachine: (old-k8s-version-738445)     <type>hvm</type>
	I1212 00:53:25.876951  138055 main.go:141] libmachine: (old-k8s-version-738445)     <boot dev='cdrom'/>
	I1212 00:53:25.876974  138055 main.go:141] libmachine: (old-k8s-version-738445)     <boot dev='hd'/>
	I1212 00:53:25.877007  138055 main.go:141] libmachine: (old-k8s-version-738445)     <bootmenu enable='no'/>
	I1212 00:53:25.877037  138055 main.go:141] libmachine: (old-k8s-version-738445)   </os>
	I1212 00:53:25.877058  138055 main.go:141] libmachine: (old-k8s-version-738445)   <devices>
	I1212 00:53:25.877080  138055 main.go:141] libmachine: (old-k8s-version-738445)     <disk type='file' device='cdrom'>
	I1212 00:53:25.877101  138055 main.go:141] libmachine: (old-k8s-version-738445)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/boot2docker.iso'/>
	I1212 00:53:25.877114  138055 main.go:141] libmachine: (old-k8s-version-738445)       <target dev='hdc' bus='scsi'/>
	I1212 00:53:25.877137  138055 main.go:141] libmachine: (old-k8s-version-738445)       <readonly/>
	I1212 00:53:25.877160  138055 main.go:141] libmachine: (old-k8s-version-738445)     </disk>
	I1212 00:53:25.877173  138055 main.go:141] libmachine: (old-k8s-version-738445)     <disk type='file' device='disk'>
	I1212 00:53:25.877187  138055 main.go:141] libmachine: (old-k8s-version-738445)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 00:53:25.877218  138055 main.go:141] libmachine: (old-k8s-version-738445)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/old-k8s-version-738445.rawdisk'/>
	I1212 00:53:25.877237  138055 main.go:141] libmachine: (old-k8s-version-738445)       <target dev='hda' bus='virtio'/>
	I1212 00:53:25.877268  138055 main.go:141] libmachine: (old-k8s-version-738445)     </disk>
	I1212 00:53:25.877293  138055 main.go:141] libmachine: (old-k8s-version-738445)     <interface type='network'>
	I1212 00:53:25.877312  138055 main.go:141] libmachine: (old-k8s-version-738445)       <source network='mk-old-k8s-version-738445'/>
	I1212 00:53:25.877323  138055 main.go:141] libmachine: (old-k8s-version-738445)       <model type='virtio'/>
	I1212 00:53:25.877344  138055 main.go:141] libmachine: (old-k8s-version-738445)     </interface>
	I1212 00:53:25.877355  138055 main.go:141] libmachine: (old-k8s-version-738445)     <interface type='network'>
	I1212 00:53:25.877368  138055 main.go:141] libmachine: (old-k8s-version-738445)       <source network='default'/>
	I1212 00:53:25.877385  138055 main.go:141] libmachine: (old-k8s-version-738445)       <model type='virtio'/>
	I1212 00:53:25.877398  138055 main.go:141] libmachine: (old-k8s-version-738445)     </interface>
	I1212 00:53:25.877409  138055 main.go:141] libmachine: (old-k8s-version-738445)     <serial type='pty'>
	I1212 00:53:25.877419  138055 main.go:141] libmachine: (old-k8s-version-738445)       <target port='0'/>
	I1212 00:53:25.877429  138055 main.go:141] libmachine: (old-k8s-version-738445)     </serial>
	I1212 00:53:25.877439  138055 main.go:141] libmachine: (old-k8s-version-738445)     <console type='pty'>
	I1212 00:53:25.877450  138055 main.go:141] libmachine: (old-k8s-version-738445)       <target type='serial' port='0'/>
	I1212 00:53:25.877466  138055 main.go:141] libmachine: (old-k8s-version-738445)     </console>
	I1212 00:53:25.877480  138055 main.go:141] libmachine: (old-k8s-version-738445)     <rng model='virtio'>
	I1212 00:53:25.877494  138055 main.go:141] libmachine: (old-k8s-version-738445)       <backend model='random'>/dev/random</backend>
	I1212 00:53:25.877504  138055 main.go:141] libmachine: (old-k8s-version-738445)     </rng>
	I1212 00:53:25.877512  138055 main.go:141] libmachine: (old-k8s-version-738445)     
	I1212 00:53:25.877521  138055 main.go:141] libmachine: (old-k8s-version-738445)     
	I1212 00:53:25.877530  138055 main.go:141] libmachine: (old-k8s-version-738445)   </devices>
	I1212 00:53:25.877539  138055 main.go:141] libmachine: (old-k8s-version-738445) </domain>
	I1212 00:53:25.877561  138055 main.go:141] libmachine: (old-k8s-version-738445) 
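
The XML above is the complete libvirt domain definition the kvm2 driver submits before booting the VM: 2200 MiB of memory, 2 vCPUs, host-passthrough CPU, a cdrom-first boot order for the boot2docker ISO, the raw disk created earlier, and two virtio NICs (the dedicated mk-old-k8s-version-738445 network plus libvirt's default network). As a rough illustration only, not the driver's actual code path (it talks to libvirt through its API rather than shelling out), an equivalent define-and-start with virsh could look like this in Go; the XML path is a placeholder:

	// define_domain.go - illustrative only; assumes virsh is installed and the
	// caller may use qemu:///system. Not the kvm2 driver's real implementation.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func defineAndStart(domainXML, name string) error {
		// "virsh define" registers the domain from an XML file like the one logged above.
		if out, err := exec.Command("virsh", "-c", "qemu:///system", "define", domainXML).CombinedOutput(); err != nil {
			return fmt.Errorf("define %s: %v: %s", name, err, out)
		}
		// "virsh start" boots it; the driver then polls DHCP leases for an IP.
		if out, err := exec.Command("virsh", "-c", "qemu:///system", "start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("start %s: %v: %s", name, err, out)
		}
		return nil
	}
	
	func main() {
		// Placeholder XML path; the domain name matches this test run.
		if err := defineAndStart("/tmp/old-k8s-version-738445.xml", "old-k8s-version-738445"); err != nil {
			fmt.Println(err)
		}
	}
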
	I1212 00:53:25.881577  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:28:da:8f in network default
	I1212 00:53:25.882174  138055 main.go:141] libmachine: (old-k8s-version-738445) Ensuring networks are active...
	I1212 00:53:25.882196  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:25.882876  138055 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network default is active
	I1212 00:53:25.883258  138055 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network mk-old-k8s-version-738445 is active
	I1212 00:53:25.883840  138055 main.go:141] libmachine: (old-k8s-version-738445) Getting domain xml...
	I1212 00:53:25.884612  138055 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 00:53:27.225290  138055 main.go:141] libmachine: (old-k8s-version-738445) Waiting to get IP...
	I1212 00:53:27.226259  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:27.226804  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:27.226882  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:27.226794  138286 retry.go:31] will retry after 236.486121ms: waiting for machine to come up
	I1212 00:53:27.465703  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:27.466369  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:27.466394  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:27.466325  138286 retry.go:31] will retry after 337.96514ms: waiting for machine to come up
	I1212 00:53:27.806083  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:27.806990  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:27.807017  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:27.806891  138286 retry.go:31] will retry after 362.459687ms: waiting for machine to come up
	I1212 00:53:28.171383  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:28.172044  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:28.172075  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:28.171997  138286 retry.go:31] will retry after 367.225806ms: waiting for machine to come up
	I1212 00:53:28.541382  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:28.541868  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:28.541896  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:28.541819  138286 retry.go:31] will retry after 725.148146ms: waiting for machine to come up
	I1212 00:53:29.268786  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:29.269166  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:29.269196  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:29.269116  138286 retry.go:31] will retry after 689.252764ms: waiting for machine to come up
	I1212 00:53:29.960062  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:29.960636  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:29.960659  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:29.960579  138286 retry.go:31] will retry after 1.015251628s: waiting for machine to come up
	I1212 00:53:30.977083  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:30.977682  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:30.977725  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:30.977648  138286 retry.go:31] will retry after 1.109718049s: waiting for machine to come up
	I1212 00:53:32.088913  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:32.089452  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:32.089474  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:32.089417  138286 retry.go:31] will retry after 1.471149787s: waiting for machine to come up
	I1212 00:53:33.563153  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:33.563643  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:33.563671  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:33.563611  138286 retry.go:31] will retry after 2.308179039s: waiting for machine to come up
	I1212 00:53:35.873656  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:35.874149  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:35.874183  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:35.874086  138286 retry.go:31] will retry after 2.8626221s: waiting for machine to come up
	I1212 00:53:38.738853  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:38.739512  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:38.739578  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:38.739479  138286 retry.go:31] will retry after 3.503601041s: waiting for machine to come up
	I1212 00:53:42.244847  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:42.245424  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 00:53:42.245450  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 00:53:42.245379  138286 retry.go:31] will retry after 4.523558734s: waiting for machine to come up
	I1212 00:53:46.770593  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.771214  138055 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 00:53:46.771239  138055 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 00:53:46.771253  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.771565  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445
	I1212 00:53:46.849212  138055 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
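
The retry.go lines above show the wait-for-IP loop backing off from a few hundred milliseconds up to several seconds until the DHCP lease for 52:54:00:00:e1:06 appeared and 192.168.72.25 was reserved. A minimal sketch of that pattern follows; the bounds, jitter, and timeout are invented for illustration and are not minikube's actual retry parameters:

	// retry_sketch.go - a simplified wait-for-IP loop with growing, jittered delays.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// lookupIP stands in for querying the DHCP leases of the libvirt network.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}
	
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Add jitter and grow the delay, mirroring the pattern in the log.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay *= 2
		}
		return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
	}
	
	func main() {
		if ip, err := waitForIP(3 * time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}
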
	I1212 00:53:46.849244  138055 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 00:53:46.849254  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 00:53:46.852948  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.853378  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:minikube Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:46.853410  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.853557  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 00:53:46.853580  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 00:53:46.853615  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 00:53:46.853722  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 00:53:46.853738  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 00:53:46.984036  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
	I1212 00:53:46.984310  138055 main.go:141] libmachine: (old-k8s-version-738445) KVM machine creation complete!
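
Creation is only declared complete once a plain "exit 0" over SSH succeeds (the "SSH cmd err, output: <nil>" line above). A hedged sketch of that reachability probe, reusing the external ssh client, address, user, and key path shown in the log; this is illustrative, not the sshutil implementation:

	// ssh_probe.go - illustrative SSH reachability check using the external ssh client.
	package main
	
	import (
		"fmt"
		"net"
		"os/exec"
		"time"
	)
	
	func sshReady(addr, keyPath string) error {
		// First make sure port 22 is accepting connections at all.
		conn, err := net.DialTimeout("tcp", addr+":22", 10*time.Second)
		if err != nil {
			return err
		}
		conn.Close()
		// Then run "exit 0" the same way the log shows, via the external client.
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		return cmd.Run()
	}
	
	func main() {
		err := sshReady("192.168.72.25",
			"/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa")
		fmt.Println("ssh ready:", err == nil)
	}
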
	I1212 00:53:46.984647  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 00:53:46.985225  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:46.985426  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:46.985601  138055 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 00:53:46.985617  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 00:53:46.986967  138055 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 00:53:46.986983  138055 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 00:53:46.986991  138055 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 00:53:46.987000  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:46.989203  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.989575  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:46.989602  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:46.989763  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:46.989932  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:46.990086  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:46.990282  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:46.990467  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:46.990709  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:46.990723  138055 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 00:53:47.107578  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:53:47.107628  138055 main.go:141] libmachine: Detecting the provisioner...
	I1212 00:53:47.107641  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.110797  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.111227  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.111260  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.111391  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.111546  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.111687  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.111818  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.112005  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:47.112216  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:47.112230  138055 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 00:53:47.233295  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 00:53:47.233426  138055 main.go:141] libmachine: found compatible host: buildroot
	I1212 00:53:47.233447  138055 main.go:141] libmachine: Provisioning with buildroot...
	I1212 00:53:47.233460  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 00:53:47.233695  138055 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 00:53:47.233721  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 00:53:47.233873  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.238045  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.238507  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.238530  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.238718  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.239010  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.239194  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.239482  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.241668  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:47.242037  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:47.242058  138055 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 00:53:47.381979  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 00:53:47.382017  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.386217  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.386670  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.386699  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.386897  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.387140  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.387382  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.387561  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.387780  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:47.388010  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:47.388037  138055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 00:53:47.526170  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 00:53:47.526202  138055 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 00:53:47.526247  138055 buildroot.go:174] setting up certificates
	I1212 00:53:47.526263  138055 provision.go:84] configureAuth start
	I1212 00:53:47.526282  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 00:53:47.526585  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 00:53:47.530840  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.531273  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.531299  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.531509  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.534355  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.534932  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.534999  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.535064  138055 provision.go:143] copyHostCerts
	I1212 00:53:47.535133  138055 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 00:53:47.535156  138055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 00:53:47.535217  138055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 00:53:47.535313  138055 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 00:53:47.535325  138055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 00:53:47.535352  138055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 00:53:47.535402  138055 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 00:53:47.535409  138055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 00:53:47.535426  138055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 00:53:47.535471  138055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
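
The generating-server-cert step above signs a certificate whose SANs are 127.0.0.1, 192.168.72.25, localhost, minikube, and old-k8s-version-738445, using the CA under .minikube/certs. A compressed, self-contained sketch of the same idea with Go's crypto/x509; it creates a throwaway CA in memory instead of reading ca.pem/ca-key.pem, and error handling is elided for brevity:

	// servercert_sketch.go - generate a CA and a SAN-bearing server cert, roughly
	// what provision.go does with the files under .minikube/certs (errors elided).
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		// Throwaway in-memory CA standing in for ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-738445"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			// The SANs from the log line above.
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-738445"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.25")},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Printf("server cert: %d bytes, signed by %s\n", len(srvDER), caCert.Subject.CommonName)
	}
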
	I1212 00:53:47.766781  138055 provision.go:177] copyRemoteCerts
	I1212 00:53:47.766863  138055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 00:53:47.766903  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.769671  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.770060  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.770092  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.770232  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.770454  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.770634  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.770754  138055 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 00:53:47.858160  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 00:53:47.883236  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 00:53:47.907976  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 00:53:47.936945  138055 provision.go:87] duration metric: took 410.660837ms to configureAuth
	I1212 00:53:47.936976  138055 buildroot.go:189] setting minikube options for container-runtime
	I1212 00:53:47.937236  138055 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:53:47.937333  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:47.940083  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.940407  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:47.940434  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:47.940603  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:47.940804  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.940956  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:47.941081  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:47.941222  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:47.941427  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:47.941444  138055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 00:53:48.173751  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 00:53:48.173790  138055 main.go:141] libmachine: Checking connection to Docker...
	I1212 00:53:48.173803  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetURL
	I1212 00:53:48.175045  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using libvirt version 6000000
	I1212 00:53:48.177030  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.177455  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.177492  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.177655  138055 main.go:141] libmachine: Docker is up and running!
	I1212 00:53:48.177672  138055 main.go:141] libmachine: Reticulating splines...
	I1212 00:53:48.177681  138055 client.go:171] duration metric: took 22.863556961s to LocalClient.Create
	I1212 00:53:48.177708  138055 start.go:167] duration metric: took 22.863646729s to libmachine.API.Create "old-k8s-version-738445"
	I1212 00:53:48.177718  138055 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 00:53:48.177728  138055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 00:53:48.177746  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.177982  138055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 00:53:48.178006  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:48.180328  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.180548  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.180572  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.180700  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:48.180883  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.181058  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:48.181225  138055 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 00:53:48.266828  138055 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 00:53:48.271726  138055 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 00:53:48.271766  138055 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 00:53:48.271883  138055 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 00:53:48.272001  138055 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 00:53:48.272148  138055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 00:53:48.282334  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:53:48.307128  138055 start.go:296] duration metric: took 129.393853ms for postStartSetup
	I1212 00:53:48.307191  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 00:53:48.307820  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 00:53:48.310377  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.310751  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.310783  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.311055  138055 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:53:48.311308  138055 start.go:128] duration metric: took 23.018665207s to createHost
	I1212 00:53:48.311355  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:48.313787  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.314135  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.314175  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.314281  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:48.314461  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.314638  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.314816  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:48.314979  138055 main.go:141] libmachine: Using SSH client type: native
	I1212 00:53:48.315139  138055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 00:53:48.315159  138055 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 00:53:48.428432  138055 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733964828.396789907
	
	I1212 00:53:48.428467  138055 fix.go:216] guest clock: 1733964828.396789907
	I1212 00:53:48.428477  138055 fix.go:229] Guest: 2024-12-12 00:53:48.396789907 +0000 UTC Remote: 2024-12-12 00:53:48.311327143 +0000 UTC m=+41.451880126 (delta=85.462764ms)
	I1212 00:53:48.428506  138055 fix.go:200] guest clock delta is within tolerance: 85.462764ms
	I1212 00:53:48.428513  138055 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 23.136065357s
	I1212 00:53:48.428543  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.428856  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 00:53:48.431413  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.431747  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.431776  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.431988  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.432521  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.432712  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:53:48.432830  138055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 00:53:48.432877  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:48.432987  138055 ssh_runner.go:195] Run: cat /version.json
	I1212 00:53:48.433045  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 00:53:48.435852  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.436042  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.436182  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.436224  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.436332  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:48.436362  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:48.436376  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:48.436571  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 00:53:48.436583  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.436749  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 00:53:48.436756  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:48.436894  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 00:53:48.436951  138055 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 00:53:48.436999  138055 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 00:53:48.544837  138055 ssh_runner.go:195] Run: systemctl --version
	I1212 00:53:48.550950  138055 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 00:53:48.713757  138055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 00:53:48.720651  138055 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 00:53:48.720729  138055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 00:53:48.739373  138055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 00:53:48.739399  138055 start.go:495] detecting cgroup driver to use...
	I1212 00:53:48.739472  138055 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 00:53:48.756005  138055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 00:53:48.771172  138055 docker.go:217] disabling cri-docker service (if available) ...
	I1212 00:53:48.771245  138055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 00:53:48.785436  138055 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 00:53:48.799035  138055 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 00:53:48.918566  138055 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 00:53:49.087209  138055 docker.go:233] disabling docker service ...
	I1212 00:53:49.087299  138055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 00:53:49.104032  138055 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 00:53:49.117720  138055 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 00:53:49.250570  138055 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 00:53:49.377245  138055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 00:53:49.394321  138055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 00:53:49.414554  138055 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 00:53:49.414627  138055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:49.426149  138055 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 00:53:49.426218  138055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:49.437592  138055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 00:53:49.450689  138055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
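
The sed commands above pin the pause image to registry.k8s.io/pause:3.2 and switch CRI-O to the cgroupfs cgroup manager with conmon placed in the "pod" cgroup. Purely for illustration, here are the same substitutions applied to an in-memory copy of 02-crio.conf; the starting values below are made up, and only the replacement strings come from the log:

	// crio_conf_sketch.go - apply the substitutions the sed commands above perform,
	// but against an in-memory copy of 02-crio.conf (initial contents are invented).
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := `pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// The sed pair in the log deletes any conmon_cgroup line and re-adds it
		// after cgroup_manager; dropping and appending has the same effect here.
		conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf += "conmon_cgroup = \"pod\"\n"
		fmt.Print(conf)
	}
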
	I1212 00:53:49.463493  138055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 00:53:49.476461  138055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 00:53:49.487925  138055 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 00:53:49.487976  138055 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 00:53:49.504080  138055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 00:53:49.514170  138055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:53:49.629214  138055 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 00:53:49.724919  138055 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 00:53:49.724994  138055 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 00:53:49.729883  138055 start.go:563] Will wait 60s for crictl version
	I1212 00:53:49.729942  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:49.733837  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 00:53:49.773542  138055 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 00:53:49.773634  138055 ssh_runner.go:195] Run: crio --version
	I1212 00:53:49.802300  138055 ssh_runner.go:195] Run: crio --version
	I1212 00:53:49.833253  138055 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1212 00:53:49.834544  138055 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 00:53:49.837424  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:49.837750  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 01:53:41 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 00:53:49.837779  138055 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 00:53:49.837961  138055 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 00:53:49.842269  138055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:53:49.855177  138055 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 00:53:49.855291  138055 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:53:49.855358  138055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:53:49.888520  138055 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 00:53:49.888582  138055 ssh_runner.go:195] Run: which lz4
	I1212 00:53:49.892632  138055 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 00:53:49.896990  138055 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 00:53:49.897022  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 00:53:51.546417  138055 crio.go:462] duration metric: took 1.653819988s to copy over tarball
	I1212 00:53:51.546504  138055 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 00:53:54.097580  138055 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551044087s)
	I1212 00:53:54.097608  138055 crio.go:469] duration metric: took 2.551162212s to extract the tarball
	I1212 00:53:54.097616  138055 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 00:53:54.139503  138055 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 00:53:54.186743  138055 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 00:53:54.186772  138055 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 00:53:54.186859  138055 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.186916  138055 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.186929  138055 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.186859  138055 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:53:54.186877  138055 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.186869  138055 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 00:53:54.186862  138055 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.186949  138055 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.188362  138055 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.188397  138055 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.188359  138055 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.188512  138055 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:53:54.188588  138055 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 00:53:54.188661  138055 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.188727  138055 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.188817  138055 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.350701  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.350835  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.352355  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.366836  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.366880  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.391857  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.417014  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 00:53:54.484105  138055 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 00:53:54.484151  138055 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.484158  138055 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 00:53:54.484192  138055 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.484203  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.484236  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.513386  138055 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 00:53:54.513439  138055 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.513492  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.556947  138055 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 00:53:54.557037  138055 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.556956  138055 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 00:53:54.557097  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.557146  138055 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.557216  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.573431  138055 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 00:53:54.573475  138055 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 00:53:54.573487  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.573503  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.573545  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.573568  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.573548  138055 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 00:53:54.573617  138055 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.573643  138055 ssh_runner.go:195] Run: which crictl
	I1212 00:53:54.573689  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.573707  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.713461  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.713542  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:53:54.713582  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.713605  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.713658  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.713661  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:54.713720  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.882997  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 00:53:54.900765  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 00:53:54.900819  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:53:54.900851  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 00:53:54.900909  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 00:53:54.900937  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 00:53:54.901031  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:55.054624  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 00:53:55.083875  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 00:53:55.083977  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 00:53:55.084032  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 00:53:55.084056  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 00:53:55.084131  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 00:53:55.084205  138055 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 00:53:55.138532  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 00:53:55.138561  138055 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 00:53:56.483700  138055 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 00:53:56.626509  138055 cache_images.go:92] duration metric: took 2.439719978s to LoadCachedImages
	W1212 00:53:56.626604  138055 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1212 00:53:56.626622  138055 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 00:53:56.626754  138055 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 00:53:56.626849  138055 ssh_runner.go:195] Run: crio config
	I1212 00:53:56.682634  138055 cni.go:84] Creating CNI manager for ""
	I1212 00:53:56.682657  138055 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:53:56.682666  138055 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 00:53:56.682685  138055 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 00:53:56.682819  138055 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 00:53:56.682874  138055 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 00:53:56.693189  138055 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 00:53:56.693256  138055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 00:53:56.703035  138055 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 00:53:56.721213  138055 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 00:53:56.740475  138055 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1212 00:53:56.761205  138055 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 00:53:56.765570  138055 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 00:53:56.778588  138055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 00:53:56.892049  138055 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 00:53:56.910148  138055 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 00:53:56.910182  138055 certs.go:194] generating shared ca certs ...
	I1212 00:53:56.910205  138055 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:56.910411  138055 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 00:53:56.910474  138055 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 00:53:56.910488  138055 certs.go:256] generating profile certs ...
	I1212 00:53:56.910575  138055 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 00:53:56.910595  138055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt with IP's: []
	I1212 00:53:56.989886  138055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt ...
	I1212 00:53:56.989917  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: {Name:mk064b3664718e7760e88dca13a69b246be0893f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:56.990097  138055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key ...
	I1212 00:53:56.990111  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key: {Name:mkb62938ccd86d2225dee25ce299c41f9d999785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:56.990217  138055 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 00:53:56.990240  138055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt.2e4d2e55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.25]
	I1212 00:53:57.071943  138055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt.2e4d2e55 ...
	I1212 00:53:57.071972  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt.2e4d2e55: {Name:mkba07fa4d8014c68b7a351d7951c9385e1e2619 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:57.115089  138055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55 ...
	I1212 00:53:57.115149  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55: {Name:mkfa1a5241546321a9fa119856def155ee10bc45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:57.115319  138055 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt.2e4d2e55 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt
	I1212 00:53:57.115426  138055 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key
	I1212 00:53:57.115496  138055 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 00:53:57.115517  138055 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt with IP's: []
	I1212 00:53:57.382065  138055 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt ...
	I1212 00:53:57.382097  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt: {Name:mkc721b7ebf5c96aa6a76ab3eee8bcbae84cf792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:57.382294  138055 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key ...
	I1212 00:53:57.382312  138055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key: {Name:mk706ec48e68acb47ca47e2e900c95b77c78f8df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 00:53:57.382667  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 00:53:57.382717  138055 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 00:53:57.382724  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 00:53:57.382748  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 00:53:57.382771  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 00:53:57.382788  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 00:53:57.382826  138055 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 00:53:57.383483  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 00:53:57.414973  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 00:53:57.443818  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 00:53:57.471344  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 00:53:57.498234  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 00:53:57.541688  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 00:53:57.573330  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 00:53:57.600018  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 00:53:57.626179  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 00:53:57.651312  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 00:53:57.675818  138055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 00:53:57.703345  138055 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 00:53:57.728172  138055 ssh_runner.go:195] Run: openssl version
	I1212 00:53:57.740060  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 00:53:57.754964  138055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:57.761914  138055 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:57.761996  138055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 00:53:57.772481  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 00:53:57.787053  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 00:53:57.802538  138055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 00:53:57.808498  138055 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 00:53:57.808570  138055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 00:53:57.821217  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 00:53:57.836671  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 00:53:57.851342  138055 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 00:53:57.856428  138055 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 00:53:57.856497  138055 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 00:53:57.862582  138055 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 00:53:57.877439  138055 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 00:53:57.882106  138055 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 00:53:57.882166  138055 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:53:57.882250  138055 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 00:53:57.882317  138055 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 00:53:57.923013  138055 cri.go:89] found id: ""
	I1212 00:53:57.923105  138055 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 00:53:57.933758  138055 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 00:53:57.944564  138055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:53:57.955148  138055 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:53:57.955175  138055 kubeadm.go:157] found existing configuration files:
	
	I1212 00:53:57.955220  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:53:57.965723  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:53:57.965793  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:53:57.977012  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:53:57.988459  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:53:57.988530  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:53:58.000510  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:53:58.012000  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:53:58.012054  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:53:58.025206  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:53:58.037105  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:53:58.037166  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:53:58.052079  138055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 00:53:58.179400  138055 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 00:53:58.179494  138055 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 00:53:58.331809  138055 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:53:58.331978  138055 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:53:58.332140  138055 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:53:58.532156  138055 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:53:58.533778  138055 out.go:235]   - Generating certificates and keys ...
	I1212 00:53:58.533892  138055 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 00:53:58.533984  138055 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 00:53:58.764206  138055 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 00:53:58.866607  138055 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1212 00:53:59.067523  138055 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1212 00:53:59.239198  138055 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1212 00:53:59.429160  138055 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1212 00:53:59.429480  138055 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-738445] and IPs [192.168.72.25 127.0.0.1 ::1]
	I1212 00:53:59.756852  138055 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1212 00:53:59.757250  138055 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-738445] and IPs [192.168.72.25 127.0.0.1 ::1]
	I1212 00:53:59.972095  138055 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 00:54:00.105563  138055 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 00:54:00.290134  138055 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1212 00:54:00.290446  138055 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:54:00.389600  138055 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:54:00.909404  138055 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:54:01.241855  138055 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:54:01.336918  138055 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:54:01.367914  138055 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:54:01.369603  138055 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:54:01.369722  138055 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 00:54:01.538222  138055 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:54:01.540331  138055 out.go:235]   - Booting up control plane ...
	I1212 00:54:01.540473  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:54:01.552275  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:54:01.553790  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:54:01.555130  138055 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:54:01.560532  138055 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:54:41.553257  138055 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 00:54:41.553518  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:54:41.553831  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:54:46.554577  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:54:46.554893  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:54:56.554017  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:54:56.554257  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:55:16.553105  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:55:16.553350  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:55:56.554769  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:55:56.555057  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:55:56.555072  138055 kubeadm.go:310] 
	I1212 00:55:56.555135  138055 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 00:55:56.555183  138055 kubeadm.go:310] 		timed out waiting for the condition
	I1212 00:55:56.555191  138055 kubeadm.go:310] 
	I1212 00:55:56.555220  138055 kubeadm.go:310] 	This error is likely caused by:
	I1212 00:55:56.555249  138055 kubeadm.go:310] 		- The kubelet is not running
	I1212 00:55:56.555347  138055 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:55:56.555362  138055 kubeadm.go:310] 
	I1212 00:55:56.555468  138055 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:55:56.555521  138055 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 00:55:56.555555  138055 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 00:55:56.555572  138055 kubeadm.go:310] 
	I1212 00:55:56.555757  138055 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 00:55:56.555860  138055 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 00:55:56.555873  138055 kubeadm.go:310] 
	I1212 00:55:56.555990  138055 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 00:55:56.556115  138055 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 00:55:56.556216  138055 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 00:55:56.556283  138055 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 00:55:56.556298  138055 kubeadm.go:310] 
	I1212 00:55:56.556621  138055 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:55:56.556753  138055 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 00:55:56.556848  138055 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 00:55:56.556997  138055 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-738445] and IPs [192.168.72.25 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-738445] and IPs [192.168.72.25 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-738445] and IPs [192.168.72.25 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-738445] and IPs [192.168.72.25 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 00:55:56.557050  138055 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 00:55:58.632496  138055 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.075403207s)
	I1212 00:55:58.632580  138055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:55:58.653078  138055 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 00:55:58.665229  138055 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 00:55:58.665251  138055 kubeadm.go:157] found existing configuration files:
	
	I1212 00:55:58.665301  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 00:55:58.674709  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 00:55:58.674780  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 00:55:58.684371  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 00:55:58.693799  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 00:55:58.693882  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 00:55:58.703785  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 00:55:58.712887  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 00:55:58.712941  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 00:55:58.722218  138055 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 00:55:58.731202  138055 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 00:55:58.731268  138055 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 00:55:58.740572  138055 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 00:55:58.809046  138055 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 00:55:58.809135  138055 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 00:55:58.946762  138055 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 00:55:58.946908  138055 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 00:55:58.947042  138055 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 00:55:59.152256  138055 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 00:55:59.154094  138055 out.go:235]   - Generating certificates and keys ...
	I1212 00:55:59.154206  138055 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 00:55:59.154324  138055 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 00:55:59.154449  138055 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 00:55:59.154532  138055 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 00:55:59.154624  138055 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 00:55:59.154714  138055 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 00:55:59.154804  138055 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 00:55:59.154893  138055 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 00:55:59.155002  138055 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 00:55:59.155103  138055 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 00:55:59.155157  138055 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 00:55:59.155236  138055 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 00:55:59.201645  138055 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 00:55:59.311480  138055 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 00:55:59.404048  138055 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 00:55:59.575776  138055 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 00:55:59.592558  138055 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 00:55:59.592697  138055 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 00:55:59.592746  138055 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 00:55:59.793401  138055 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 00:55:59.796212  138055 out.go:235]   - Booting up control plane ...
	I1212 00:55:59.796365  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 00:55:59.803660  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 00:55:59.805265  138055 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 00:55:59.818160  138055 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 00:55:59.821646  138055 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 00:56:39.824563  138055 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 00:56:39.824688  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:56:39.824997  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:56:44.826003  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:56:44.826301  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:56:54.827142  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:56:54.827395  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:57:14.826112  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:57:14.826304  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:57:54.825710  138055 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 00:57:54.825952  138055 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 00:57:54.825971  138055 kubeadm.go:310] 
	I1212 00:57:54.826007  138055 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 00:57:54.826040  138055 kubeadm.go:310] 		timed out waiting for the condition
	I1212 00:57:54.826047  138055 kubeadm.go:310] 
	I1212 00:57:54.826093  138055 kubeadm.go:310] 	This error is likely caused by:
	I1212 00:57:54.826126  138055 kubeadm.go:310] 		- The kubelet is not running
	I1212 00:57:54.826214  138055 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 00:57:54.826221  138055 kubeadm.go:310] 
	I1212 00:57:54.826311  138055 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 00:57:54.826340  138055 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 00:57:54.826369  138055 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 00:57:54.826375  138055 kubeadm.go:310] 
	I1212 00:57:54.826533  138055 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 00:57:54.826678  138055 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 00:57:54.826690  138055 kubeadm.go:310] 
	I1212 00:57:54.826818  138055 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 00:57:54.826901  138055 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 00:57:54.826965  138055 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 00:57:54.827026  138055 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 00:57:54.827034  138055 kubeadm.go:310] 
	I1212 00:57:54.828086  138055 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 00:57:54.828184  138055 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 00:57:54.828262  138055 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 00:57:54.828340  138055 kubeadm.go:394] duration metric: took 3m56.946179084s to StartCluster
	I1212 00:57:54.828382  138055 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 00:57:54.828429  138055 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 00:57:54.884529  138055 cri.go:89] found id: ""
	I1212 00:57:54.884550  138055 logs.go:282] 0 containers: []
	W1212 00:57:54.884558  138055 logs.go:284] No container was found matching "kube-apiserver"
	I1212 00:57:54.884564  138055 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 00:57:54.884618  138055 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 00:57:54.920759  138055 cri.go:89] found id: ""
	I1212 00:57:54.920796  138055 logs.go:282] 0 containers: []
	W1212 00:57:54.920806  138055 logs.go:284] No container was found matching "etcd"
	I1212 00:57:54.920813  138055 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 00:57:54.920878  138055 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 00:57:54.957261  138055 cri.go:89] found id: ""
	I1212 00:57:54.957306  138055 logs.go:282] 0 containers: []
	W1212 00:57:54.957320  138055 logs.go:284] No container was found matching "coredns"
	I1212 00:57:54.957328  138055 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 00:57:54.957398  138055 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 00:57:54.991470  138055 cri.go:89] found id: ""
	I1212 00:57:54.991503  138055 logs.go:282] 0 containers: []
	W1212 00:57:54.991514  138055 logs.go:284] No container was found matching "kube-scheduler"
	I1212 00:57:54.991522  138055 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 00:57:54.991586  138055 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 00:57:55.025518  138055 cri.go:89] found id: ""
	I1212 00:57:55.025546  138055 logs.go:282] 0 containers: []
	W1212 00:57:55.025554  138055 logs.go:284] No container was found matching "kube-proxy"
	I1212 00:57:55.025561  138055 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 00:57:55.025614  138055 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 00:57:55.060064  138055 cri.go:89] found id: ""
	I1212 00:57:55.060090  138055 logs.go:282] 0 containers: []
	W1212 00:57:55.060098  138055 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 00:57:55.060114  138055 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 00:57:55.060166  138055 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 00:57:55.094524  138055 cri.go:89] found id: ""
	I1212 00:57:55.094553  138055 logs.go:282] 0 containers: []
	W1212 00:57:55.094560  138055 logs.go:284] No container was found matching "kindnet"
	I1212 00:57:55.094571  138055 logs.go:123] Gathering logs for kubelet ...
	I1212 00:57:55.094585  138055 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 00:57:55.141960  138055 logs.go:123] Gathering logs for dmesg ...
	I1212 00:57:55.141995  138055 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 00:57:55.157785  138055 logs.go:123] Gathering logs for describe nodes ...
	I1212 00:57:55.157822  138055 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 00:57:55.287105  138055 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 00:57:55.287129  138055 logs.go:123] Gathering logs for CRI-O ...
	I1212 00:57:55.287146  138055 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 00:57:55.389098  138055 logs.go:123] Gathering logs for container status ...
	I1212 00:57:55.389142  138055 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 00:57:55.429850  138055 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 00:57:55.429900  138055 out.go:270] * 
	* 
	W1212 00:57:55.429952  138055 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:57:55.429966  138055 out.go:270] * 
	* 
	W1212 00:57:55.430795  138055 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:57:55.434427  138055 out.go:201] 
	W1212 00:57:55.435569  138055 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 00:57:55.435637  138055 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 00:57:55.435666  138055 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 00:57:55.437182  138055 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-738445 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445
E1212 00:57:55.698218   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 6 (235.919088ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:57:55.715285  141201 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-738445" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (288.88s)
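
A minimal diagnostic sketch for the failure above, run from inside the node (assuming `minikube ssh -p old-k8s-version-738445` reaches the VM; the profile name is taken from the log, everything else here is an assumption, not part of the test run): the kubeadm output, repeated at several log levels, reduces to one symptom: the kubelet never answers on http://localhost:10248/healthz, so the wait-control-plane phase times out after 4m0s. The commands below simply follow the troubleshooting steps the log itself suggests.

	# Is the kubelet service running, and why did it exit?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 200

	# Probe the health endpoint that kubeadm polls during wait-control-plane
	curl -sS http://localhost:10248/healthz || echo "kubelet healthz not reachable"

	# List any control-plane containers cri-o started, then inspect a failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

If the journal shows the kubelet exiting over a cgroup-driver mismatch, the log's own suggestion applies: retry `minikube start` with --extra-config=kubelet.cgroup-driver=systemd, and `sudo systemctl enable kubelet.service` clears the preflight warning about the disabled kubelet service.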

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-242725 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-242725 --alsologtostderr -v=3: exit status 82 (2m0.772793762s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-242725"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:55:30.431541  139898 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:55:30.431726  139898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:55:30.431742  139898 out.go:358] Setting ErrFile to fd 2...
	I1212 00:55:30.431748  139898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:55:30.432062  139898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:55:30.432429  139898 out.go:352] Setting JSON to false
	I1212 00:55:30.432519  139898 mustload.go:65] Loading cluster: no-preload-242725
	I1212 00:55:30.433065  139898 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:55:30.433178  139898 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json ...
	I1212 00:55:30.433422  139898 mustload.go:65] Loading cluster: no-preload-242725
	I1212 00:55:30.433580  139898 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:55:30.433624  139898 stop.go:39] StopHost: no-preload-242725
	I1212 00:55:30.434230  139898 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:55:30.434299  139898 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:55:30.456146  139898 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39293
	I1212 00:55:30.456764  139898 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:55:30.457434  139898 main.go:141] libmachine: Using API Version  1
	I1212 00:55:30.457461  139898 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:55:30.457958  139898 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:55:30.460549  139898 out.go:177] * Stopping node "no-preload-242725"  ...
	I1212 00:55:30.462297  139898 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1212 00:55:30.462348  139898 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 00:55:30.462856  139898 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1212 00:55:30.462883  139898 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 00:55:30.465844  139898 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 00:55:30.466354  139898 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 01:54:12 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 00:55:30.466389  139898 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 00:55:30.466627  139898 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 00:55:30.466788  139898 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 00:55:30.466979  139898 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 00:55:30.467142  139898 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 00:55:30.565918  139898 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1212 00:55:30.644135  139898 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1212 00:55:30.704288  139898 main.go:141] libmachine: Stopping "no-preload-242725"...
	I1212 00:55:30.704325  139898 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 00:55:30.706224  139898 main.go:141] libmachine: (no-preload-242725) Calling .Stop
	I1212 00:55:30.710450  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 0/120
	I1212 00:55:31.952804  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 1/120
	I1212 00:55:32.954405  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 2/120
	I1212 00:55:33.955822  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 3/120
	I1212 00:55:34.957217  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 4/120
	I1212 00:55:35.959393  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 5/120
	I1212 00:55:36.960724  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 6/120
	I1212 00:55:37.962071  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 7/120
	I1212 00:55:38.963493  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 8/120
	I1212 00:55:39.965718  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 9/120
	I1212 00:55:40.968077  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 10/120
	I1212 00:55:41.969609  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 11/120
	I1212 00:55:42.970892  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 12/120
	I1212 00:55:43.972854  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 13/120
	I1212 00:55:44.974354  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 14/120
	I1212 00:55:45.976553  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 15/120
	I1212 00:55:46.977734  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 16/120
	I1212 00:55:47.979220  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 17/120
	I1212 00:55:48.980502  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 18/120
	I1212 00:55:49.981844  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 19/120
	I1212 00:55:50.984203  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 20/120
	I1212 00:55:51.985472  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 21/120
	I1212 00:55:52.986840  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 22/120
	I1212 00:55:53.988263  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 23/120
	I1212 00:55:54.989720  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 24/120
	I1212 00:55:55.991791  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 25/120
	I1212 00:55:56.994102  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 26/120
	I1212 00:55:57.995317  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 27/120
	I1212 00:55:58.996958  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 28/120
	I1212 00:55:59.998577  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 29/120
	I1212 00:56:01.000792  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 30/120
	I1212 00:56:02.002469  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 31/120
	I1212 00:56:03.004502  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 32/120
	I1212 00:56:04.006077  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 33/120
	I1212 00:56:05.007381  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 34/120
	I1212 00:56:06.009451  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 35/120
	I1212 00:56:07.011211  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 36/120
	I1212 00:56:08.012718  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 37/120
	I1212 00:56:09.014147  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 38/120
	I1212 00:56:10.015734  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 39/120
	I1212 00:56:11.017939  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 40/120
	I1212 00:56:12.019308  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 41/120
	I1212 00:56:13.020726  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 42/120
	I1212 00:56:14.022507  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 43/120
	I1212 00:56:15.023783  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 44/120
	I1212 00:56:16.025820  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 45/120
	I1212 00:56:17.027126  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 46/120
	I1212 00:56:18.028487  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 47/120
	I1212 00:56:19.030081  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 48/120
	I1212 00:56:20.031269  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 49/120
	I1212 00:56:21.033010  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 50/120
	I1212 00:56:22.034385  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 51/120
	I1212 00:56:23.035665  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 52/120
	I1212 00:56:24.036885  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 53/120
	I1212 00:56:25.038150  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 54/120
	I1212 00:56:26.040160  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 55/120
	I1212 00:56:27.042028  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 56/120
	I1212 00:56:28.043223  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 57/120
	I1212 00:56:29.044707  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 58/120
	I1212 00:56:30.045963  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 59/120
	I1212 00:56:31.048109  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 60/120
	I1212 00:56:32.049933  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 61/120
	I1212 00:56:33.051375  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 62/120
	I1212 00:56:34.052745  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 63/120
	I1212 00:56:35.054286  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 64/120
	I1212 00:56:36.056203  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 65/120
	I1212 00:56:37.057680  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 66/120
	I1212 00:56:38.059371  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 67/120
	I1212 00:56:39.060876  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 68/120
	I1212 00:56:40.062512  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 69/120
	I1212 00:56:41.064583  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 70/120
	I1212 00:56:42.065938  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 71/120
	I1212 00:56:43.067294  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 72/120
	I1212 00:56:44.068646  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 73/120
	I1212 00:56:45.070394  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 74/120
	I1212 00:56:46.072128  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 75/120
	I1212 00:56:47.073718  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 76/120
	I1212 00:56:48.075089  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 77/120
	I1212 00:56:49.077456  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 78/120
	I1212 00:56:50.078944  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 79/120
	I1212 00:56:51.081320  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 80/120
	I1212 00:56:52.082624  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 81/120
	I1212 00:56:53.084114  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 82/120
	I1212 00:56:54.085794  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 83/120
	I1212 00:56:55.087017  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 84/120
	I1212 00:56:56.088940  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 85/120
	I1212 00:56:57.090204  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 86/120
	I1212 00:56:58.091428  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 87/120
	I1212 00:56:59.092771  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 88/120
	I1212 00:57:00.094000  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 89/120
	I1212 00:57:01.096115  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 90/120
	I1212 00:57:02.097378  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 91/120
	I1212 00:57:03.098755  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 92/120
	I1212 00:57:04.100171  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 93/120
	I1212 00:57:05.102110  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 94/120
	I1212 00:57:06.104119  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 95/120
	I1212 00:57:07.105551  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 96/120
	I1212 00:57:08.106813  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 97/120
	I1212 00:57:09.108193  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 98/120
	I1212 00:57:10.109495  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 99/120
	I1212 00:57:11.111747  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 100/120
	I1212 00:57:12.113232  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 101/120
	I1212 00:57:13.114467  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 102/120
	I1212 00:57:14.115956  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 103/120
	I1212 00:57:15.117215  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 104/120
	I1212 00:57:16.119239  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 105/120
	I1212 00:57:17.120522  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 106/120
	I1212 00:57:18.121899  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 107/120
	I1212 00:57:19.123295  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 108/120
	I1212 00:57:20.124685  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 109/120
	I1212 00:57:21.126739  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 110/120
	I1212 00:57:22.128080  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 111/120
	I1212 00:57:23.129442  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 112/120
	I1212 00:57:24.131194  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 113/120
	I1212 00:57:25.132447  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 114/120
	I1212 00:57:26.134258  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 115/120
	I1212 00:57:27.135571  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 116/120
	I1212 00:57:28.136868  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 117/120
	I1212 00:57:29.138199  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 118/120
	I1212 00:57:30.139475  139898 main.go:141] libmachine: (no-preload-242725) Waiting for machine to stop 119/120
	I1212 00:57:31.140933  139898 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1212 00:57:31.141011  139898 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 00:57:31.142922  139898 out.go:201] 
	W1212 00:57:31.144379  139898 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 00:57:31.144395  139898 out.go:270] * 
	* 
	W1212 00:57:31.147710  139898 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:57:31.149062  139898 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-242725 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725: exit status 3 (18.521532833s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:57:49.671979  140945 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.222:22: connect: no route to host
	E1212 00:57:49.672002  140945 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.222:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-242725" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.30s)
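The "Waiting for machine to stop N/120" lines above show the shape of the failure: the stop path polls the KVM guest roughly once per second, gives up after 120 attempts while the domain still reports "Running", and surfaces GUEST_STOP_TIMEOUT (exit status 82). The following is a minimal Go sketch of that bounded poll loop, written only to illustrate the observed pattern; the function and type names (waitForStop, stateFn) are hypothetical and are not minikube's actual API.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// stateFn stands in for whatever queries the hypervisor for the domain state.
	type stateFn func() (string, error)

	// waitForStop polls up to `attempts` times, ~1s apart, mirroring the
	// "Waiting for machine to stop i/120" messages in the log above.
	func waitForStop(name string, getState stateFn, attempts int) error {
		for i := 0; i < attempts; i++ {
			st, err := getState()
			if err == nil && st != "Running" {
				return nil // machine left the Running state
			}
			fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", name, i, attempts)
			time.Sleep(time.Second)
		}
		// After the last attempt the caller reports GUEST_STOP_TIMEOUT (exit code 82).
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a guest that never leaves "Running", as in the run above.
		err := waitForStop("no-preload-242725", func() (string, error) { return "Running", nil }, 120)
		fmt.Println("stop err:", err)
	}

Run as-is this takes the full two minutes, matching the ~2m0.5s wall time reported for the failing stop command.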

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-607268 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-607268 --alsologtostderr -v=3: exit status 82 (2m0.520416028s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-607268"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:55:32.958642  140202 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:55:32.958774  140202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:55:32.958784  140202 out.go:358] Setting ErrFile to fd 2...
	I1212 00:55:32.958789  140202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:55:32.958972  140202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:55:32.959256  140202 out.go:352] Setting JSON to false
	I1212 00:55:32.959356  140202 mustload.go:65] Loading cluster: embed-certs-607268
	I1212 00:55:32.959804  140202 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:55:32.959886  140202 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/config.json ...
	I1212 00:55:32.960068  140202 mustload.go:65] Loading cluster: embed-certs-607268
	I1212 00:55:32.960193  140202 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:55:32.960239  140202 stop.go:39] StopHost: embed-certs-607268
	I1212 00:55:32.960678  140202 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:55:32.960720  140202 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:55:32.975802  140202 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42803
	I1212 00:55:32.976305  140202 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:55:32.976903  140202 main.go:141] libmachine: Using API Version  1
	I1212 00:55:32.976935  140202 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:55:32.977234  140202 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:55:32.979855  140202 out.go:177] * Stopping node "embed-certs-607268"  ...
	I1212 00:55:32.981175  140202 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1212 00:55:32.981203  140202 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 00:55:32.981433  140202 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1212 00:55:32.981461  140202 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 00:55:32.984691  140202 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 00:55:32.985096  140202 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 01:54:39 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 00:55:32.985127  140202 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 00:55:32.985294  140202 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 00:55:32.985495  140202 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 00:55:32.985657  140202 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 00:55:32.985822  140202 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 00:55:33.102150  140202 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1212 00:55:33.159928  140202 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1212 00:55:33.225980  140202 main.go:141] libmachine: Stopping "embed-certs-607268"...
	I1212 00:55:33.226026  140202 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 00:55:33.229592  140202 main.go:141] libmachine: (embed-certs-607268) Calling .Stop
	I1212 00:55:33.234966  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 0/120
	I1212 00:55:34.236250  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 1/120
	I1212 00:55:35.238177  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 2/120
	I1212 00:55:36.240239  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 3/120
	I1212 00:55:37.241540  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 4/120
	I1212 00:55:38.243426  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 5/120
	I1212 00:55:39.244762  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 6/120
	I1212 00:55:40.246061  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 7/120
	I1212 00:55:41.247984  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 8/120
	I1212 00:55:42.249619  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 9/120
	I1212 00:55:43.251952  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 10/120
	I1212 00:55:44.254090  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 11/120
	I1212 00:55:45.255534  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 12/120
	I1212 00:55:46.256809  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 13/120
	I1212 00:55:47.258276  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 14/120
	I1212 00:55:48.260355  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 15/120
	I1212 00:55:49.261857  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 16/120
	I1212 00:55:50.263358  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 17/120
	I1212 00:55:51.264687  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 18/120
	I1212 00:55:52.266058  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 19/120
	I1212 00:55:53.268319  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 20/120
	I1212 00:55:54.269748  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 21/120
	I1212 00:55:55.271077  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 22/120
	I1212 00:55:56.272481  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 23/120
	I1212 00:55:57.273693  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 24/120
	I1212 00:55:58.275438  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 25/120
	I1212 00:55:59.276770  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 26/120
	I1212 00:56:00.278457  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 27/120
	I1212 00:56:01.279869  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 28/120
	I1212 00:56:02.281334  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 29/120
	I1212 00:56:03.283089  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 30/120
	I1212 00:56:04.284495  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 31/120
	I1212 00:56:05.286200  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 32/120
	I1212 00:56:06.287410  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 33/120
	I1212 00:56:07.288829  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 34/120
	I1212 00:56:08.291184  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 35/120
	I1212 00:56:09.292735  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 36/120
	I1212 00:56:10.294294  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 37/120
	I1212 00:56:11.295714  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 38/120
	I1212 00:56:12.297621  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 39/120
	I1212 00:56:13.299271  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 40/120
	I1212 00:56:14.300703  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 41/120
	I1212 00:56:15.302118  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 42/120
	I1212 00:56:16.303349  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 43/120
	I1212 00:56:17.304839  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 44/120
	I1212 00:56:18.306605  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 45/120
	I1212 00:56:19.308085  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 46/120
	I1212 00:56:20.309879  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 47/120
	I1212 00:56:21.311141  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 48/120
	I1212 00:56:22.312402  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 49/120
	I1212 00:56:23.314568  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 50/120
	I1212 00:56:24.315977  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 51/120
	I1212 00:56:25.318089  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 52/120
	I1212 00:56:26.319251  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 53/120
	I1212 00:56:27.320564  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 54/120
	I1212 00:56:28.322355  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 55/120
	I1212 00:56:29.323794  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 56/120
	I1212 00:56:30.326078  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 57/120
	I1212 00:56:31.327408  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 58/120
	I1212 00:56:32.328704  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 59/120
	I1212 00:56:33.330540  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 60/120
	I1212 00:56:34.331983  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 61/120
	I1212 00:56:35.334083  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 62/120
	I1212 00:56:36.335332  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 63/120
	I1212 00:56:37.336643  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 64/120
	I1212 00:56:38.338518  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 65/120
	I1212 00:56:39.340166  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 66/120
	I1212 00:56:40.341652  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 67/120
	I1212 00:56:41.343008  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 68/120
	I1212 00:56:42.344588  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 69/120
	I1212 00:56:43.346220  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 70/120
	I1212 00:56:44.348064  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 71/120
	I1212 00:56:45.350537  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 72/120
	I1212 00:56:46.351964  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 73/120
	I1212 00:56:47.353381  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 74/120
	I1212 00:56:48.355414  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 75/120
	I1212 00:56:49.356940  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 76/120
	I1212 00:56:50.358230  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 77/120
	I1212 00:56:51.359568  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 78/120
	I1212 00:56:52.361057  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 79/120
	I1212 00:56:53.363258  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 80/120
	I1212 00:56:54.364869  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 81/120
	I1212 00:56:55.366466  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 82/120
	I1212 00:56:56.367689  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 83/120
	I1212 00:56:57.369124  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 84/120
	I1212 00:56:58.370988  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 85/120
	I1212 00:56:59.372177  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 86/120
	I1212 00:57:00.373589  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 87/120
	I1212 00:57:01.375028  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 88/120
	I1212 00:57:02.376252  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 89/120
	I1212 00:57:03.378204  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 90/120
	I1212 00:57:04.379472  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 91/120
	I1212 00:57:05.380791  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 92/120
	I1212 00:57:06.382274  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 93/120
	I1212 00:57:07.383616  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 94/120
	I1212 00:57:08.385522  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 95/120
	I1212 00:57:09.386856  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 96/120
	I1212 00:57:10.388139  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 97/120
	I1212 00:57:11.389455  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 98/120
	I1212 00:57:12.390863  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 99/120
	I1212 00:57:13.392987  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 100/120
	I1212 00:57:14.394362  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 101/120
	I1212 00:57:15.395819  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 102/120
	I1212 00:57:16.397086  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 103/120
	I1212 00:57:17.398442  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 104/120
	I1212 00:57:18.400397  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 105/120
	I1212 00:57:19.401878  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 106/120
	I1212 00:57:20.403125  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 107/120
	I1212 00:57:21.404434  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 108/120
	I1212 00:57:22.405682  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 109/120
	I1212 00:57:23.407780  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 110/120
	I1212 00:57:24.409078  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 111/120
	I1212 00:57:25.410652  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 112/120
	I1212 00:57:26.412021  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 113/120
	I1212 00:57:27.413277  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 114/120
	I1212 00:57:28.415302  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 115/120
	I1212 00:57:29.416721  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 116/120
	I1212 00:57:30.417899  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 117/120
	I1212 00:57:31.419167  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 118/120
	I1212 00:57:32.420550  140202 main.go:141] libmachine: (embed-certs-607268) Waiting for machine to stop 119/120
	I1212 00:57:33.421764  140202 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1212 00:57:33.421836  140202 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 00:57:33.423804  140202 out.go:201] 
	W1212 00:57:33.425111  140202 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 00:57:33.425125  140202 out.go:270] * 
	* 
	W1212 00:57:33.428215  140202 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:57:33.429718  140202 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-607268 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268
E1212 00:57:46.618080   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268: exit status 3 (18.544729766s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:57:51.975901  140991 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.151:22: connect: no route to host
	E1212 00:57:51.975921  140991 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.151:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-607268" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.07s)
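The post-mortem status check above fails with "dial tcp 192.168.50.151:22: connect: no route to host", i.e. the node's SSH port is unreachable once the VM ends up in this state. A quick way to confirm that from the test host is a plain TCP probe; this is a diagnostic sketch, not part of the minikube code base, and the address is taken from the DHCP lease recorded in the log above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// IP and port come from the DHCP lease and SSH client lines in the log above.
		addr := "192.168.50.151:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// Matches the post-mortem failure: "dial tcp ...:22: connect: no route to host".
			fmt.Println("node unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("port 22 reachable; SSH-based status checks should succeed")
	}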

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-076578 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-076578 --alsologtostderr -v=3: exit status 82 (2m0.53581909s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-076578"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:56:45.195665  140762 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:56:45.195798  140762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:56:45.195808  140762 out.go:358] Setting ErrFile to fd 2...
	I1212 00:56:45.195812  140762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:56:45.195977  140762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:56:45.196196  140762 out.go:352] Setting JSON to false
	I1212 00:56:45.196273  140762 mustload.go:65] Loading cluster: default-k8s-diff-port-076578
	I1212 00:56:45.196633  140762 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:56:45.196698  140762 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/config.json ...
	I1212 00:56:45.196863  140762 mustload.go:65] Loading cluster: default-k8s-diff-port-076578
	I1212 00:56:45.196965  140762 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:56:45.196991  140762 stop.go:39] StopHost: default-k8s-diff-port-076578
	I1212 00:56:45.197313  140762 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:56:45.197368  140762 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:56:45.212870  140762 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I1212 00:56:45.213344  140762 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:56:45.213964  140762 main.go:141] libmachine: Using API Version  1
	I1212 00:56:45.213990  140762 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:56:45.214311  140762 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:56:45.216718  140762 out.go:177] * Stopping node "default-k8s-diff-port-076578"  ...
	I1212 00:56:45.218131  140762 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1212 00:56:45.218164  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 00:56:45.218366  140762 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1212 00:56:45.218410  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 00:56:45.221253  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 00:56:45.221688  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 01:55:47 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 00:56:45.221711  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 00:56:45.221903  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 00:56:45.222066  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 00:56:45.222225  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 00:56:45.222390  140762 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 00:56:45.312785  140762 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1212 00:56:45.418666  140762 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1212 00:56:45.482800  140762 main.go:141] libmachine: Stopping "default-k8s-diff-port-076578"...
	I1212 00:56:45.482830  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 00:56:45.484961  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Stop
	I1212 00:56:45.488654  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 0/120
	I1212 00:56:46.489983  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 1/120
	I1212 00:56:47.491215  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 2/120
	I1212 00:56:48.492549  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 3/120
	I1212 00:56:49.493754  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 4/120
	I1212 00:56:50.496049  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 5/120
	I1212 00:56:51.497385  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 6/120
	I1212 00:56:52.498461  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 7/120
	I1212 00:56:53.499790  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 8/120
	I1212 00:56:54.501504  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 9/120
	I1212 00:56:55.503584  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 10/120
	I1212 00:56:56.505002  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 11/120
	I1212 00:56:57.506119  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 12/120
	I1212 00:56:58.507518  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 13/120
	I1212 00:56:59.508710  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 14/120
	I1212 00:57:00.510655  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 15/120
	I1212 00:57:01.512157  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 16/120
	I1212 00:57:02.513867  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 17/120
	I1212 00:57:03.515207  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 18/120
	I1212 00:57:04.517140  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 19/120
	I1212 00:57:05.519153  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 20/120
	I1212 00:57:06.520592  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 21/120
	I1212 00:57:07.522005  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 22/120
	I1212 00:57:08.523425  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 23/120
	I1212 00:57:09.524770  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 24/120
	I1212 00:57:10.526833  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 25/120
	I1212 00:57:11.528148  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 26/120
	I1212 00:57:12.529948  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 27/120
	I1212 00:57:13.531506  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 28/120
	I1212 00:57:14.532778  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 29/120
	I1212 00:57:15.534768  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 30/120
	I1212 00:57:16.536212  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 31/120
	I1212 00:57:17.538236  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 32/120
	I1212 00:57:18.539710  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 33/120
	I1212 00:57:19.540921  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 34/120
	I1212 00:57:20.542797  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 35/120
	I1212 00:57:21.544240  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 36/120
	I1212 00:57:22.546065  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 37/120
	I1212 00:57:23.547465  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 38/120
	I1212 00:57:24.548709  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 39/120
	I1212 00:57:25.550813  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 40/120
	I1212 00:57:26.552159  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 41/120
	I1212 00:57:27.553984  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 42/120
	I1212 00:57:28.555247  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 43/120
	I1212 00:57:29.556678  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 44/120
	I1212 00:57:30.558574  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 45/120
	I1212 00:57:31.559997  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 46/120
	I1212 00:57:32.562018  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 47/120
	I1212 00:57:33.563817  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 48/120
	I1212 00:57:34.565179  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 49/120
	I1212 00:57:35.567230  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 50/120
	I1212 00:57:36.568861  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 51/120
	I1212 00:57:37.570052  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 52/120
	I1212 00:57:38.571589  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 53/120
	I1212 00:57:39.573013  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 54/120
	I1212 00:57:40.574945  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 55/120
	I1212 00:57:41.576405  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 56/120
	I1212 00:57:42.577915  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 57/120
	I1212 00:57:43.579766  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 58/120
	I1212 00:57:44.581071  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 59/120
	I1212 00:57:45.583252  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 60/120
	I1212 00:57:46.584653  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 61/120
	I1212 00:57:47.585954  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 62/120
	I1212 00:57:48.587442  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 63/120
	I1212 00:57:49.588953  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 64/120
	I1212 00:57:50.591033  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 65/120
	I1212 00:57:51.592385  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 66/120
	I1212 00:57:52.593686  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 67/120
	I1212 00:57:53.595029  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 68/120
	I1212 00:57:54.596437  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 69/120
	I1212 00:57:55.597823  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 70/120
	I1212 00:57:56.599112  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 71/120
	I1212 00:57:57.600412  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 72/120
	I1212 00:57:58.602246  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 73/120
	I1212 00:57:59.603433  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 74/120
	I1212 00:58:00.604982  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 75/120
	I1212 00:58:01.606236  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 76/120
	I1212 00:58:02.607428  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 77/120
	I1212 00:58:03.608897  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 78/120
	I1212 00:58:04.610109  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 79/120
	I1212 00:58:05.612162  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 80/120
	I1212 00:58:06.613466  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 81/120
	I1212 00:58:07.614791  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 82/120
	I1212 00:58:08.616666  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 83/120
	I1212 00:58:09.617969  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 84/120
	I1212 00:58:10.619984  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 85/120
	I1212 00:58:11.622141  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 86/120
	I1212 00:58:12.623364  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 87/120
	I1212 00:58:13.624767  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 88/120
	I1212 00:58:14.626039  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 89/120
	I1212 00:58:15.628218  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 90/120
	I1212 00:58:16.629510  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 91/120
	I1212 00:58:17.630684  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 92/120
	I1212 00:58:18.632058  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 93/120
	I1212 00:58:19.633381  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 94/120
	I1212 00:58:20.635203  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 95/120
	I1212 00:58:21.636613  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 96/120
	I1212 00:58:22.637976  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 97/120
	I1212 00:58:23.639316  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 98/120
	I1212 00:58:24.640689  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 99/120
	I1212 00:58:25.642739  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 100/120
	I1212 00:58:26.644078  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 101/120
	I1212 00:58:27.645460  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 102/120
	I1212 00:58:28.646880  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 103/120
	I1212 00:58:29.648411  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 104/120
	I1212 00:58:30.650485  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 105/120
	I1212 00:58:31.651868  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 106/120
	I1212 00:58:32.654323  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 107/120
	I1212 00:58:33.655759  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 108/120
	I1212 00:58:34.658155  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 109/120
	I1212 00:58:35.660710  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 110/120
	I1212 00:58:36.662232  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 111/120
	I1212 00:58:37.663760  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 112/120
	I1212 00:58:38.665124  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 113/120
	I1212 00:58:39.666595  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 114/120
	I1212 00:58:40.668985  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 115/120
	I1212 00:58:41.670391  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 116/120
	I1212 00:58:42.671875  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 117/120
	I1212 00:58:43.673222  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 118/120
	I1212 00:58:44.674527  140762 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for machine to stop 119/120
	I1212 00:58:45.675244  140762 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1212 00:58:45.675309  140762 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1212 00:58:45.677192  140762 out.go:201] 
	W1212 00:58:45.678669  140762 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1212 00:58:45.678683  140762 out.go:270] * 
	* 
	W1212 00:58:45.681943  140762 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 00:58:45.683376  140762 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-076578 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578: exit status 3 (18.48240813s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:59:04.167955  141671 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host
	E1212 00:59:04.167986  141671 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-076578" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)
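Editor's note: the stop failure above shows libmachine polling the guest once per second for 120 attempts ("Waiting for machine to stop N/120") and then giving up with GUEST_STOP_TIMEOUT because the domain still reports "Running". A minimal Go sketch of that kind of bounded poll is below; vmState is a hypothetical stand-in for the driver query, not minikube's actual API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a hypothetical helper standing in for a libvirt/driver state query;
// in the failing run it kept returning "Running".
func vmState() string { return "Running" }

// waitForStop polls the VM state once per interval, up to maxAttempts times,
// mirroring the "Waiting for machine to stop N/120" pattern in the log above.
func waitForStop(maxAttempts int, interval time.Duration) error {
	for i := 0; i < maxAttempts; i++ {
		if vmState() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	if err := waitForStop(120, time.Second); err != nil {
		// the caller would surface this as the GUEST_STOP_TIMEOUT exit seen above
		fmt.Println("stop err:", err)
	}
}

In the run recorded here the state never left "Running", so after the 120th iteration the error was wrapped into exit status 82.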

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725: exit status 3 (3.167776542s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:57:52.839925  141073 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.222:22: connect: no route to host
	E1212 00:57:52.839950  141073 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.222:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-242725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-242725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153252056s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.222:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-242725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725: exit status 3 (3.062449251s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:58:02.055947  141350 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.222:22: connect: no route to host
	E1212 00:58:02.055966  141350 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.222:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-242725" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
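Editor's note: every status probe in this failure dies on "dial tcp 192.168.61.222:22: connect: no route to host", i.e. the guest's SSH port is unreachable rather than the machine being cleanly stopped. A quick way to reproduce just that reachability check from the host is sketched below (illustrative Go, not the harness's own code).

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether a TCP connection to host:22 can be opened within
// the timeout; "no route to host" surfaces here as the dial error.
func sshReachable(host string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// 192.168.61.222 is the guest IP taken from the log above.
	if err := sshReachable("192.168.61.222", 3*time.Second); err != nil {
		fmt.Println("ssh port unreachable:", err)
	} else {
		fmt.Println("ssh port reachable")
	}
}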

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268: exit status 3 (3.167872146s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:57:55.143873  141112 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.151:22: connect: no route to host
	E1212 00:57:55.143894  141112 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.151:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-607268 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-607268 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154686075s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.151:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-607268 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268: exit status 3 (3.060960892s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:58:04.359941  141381 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.151:22: connect: no route to host
	E1212 00:58:04.359967  141381 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.151:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-607268" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-738445 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-738445 create -f testdata/busybox.yaml: exit status 1 (42.922046ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-738445" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-738445 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 6 (215.40735ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:57:55.974589  141241 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-738445" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 6 (219.716119ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:57:56.194749  141272 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-738445" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.48s)
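Editor's note: DeployApp never reaches the cluster because kubectl reports that the context "old-k8s-version-738445" does not exist, and the status helper confirms the profile is missing from the kubeconfig at /home/jenkins/minikube-integration/20083-86355/kubeconfig. A hedged client-go sketch for checking a context by name follows (it assumes k8s.io/client-go is available; this is not the test's own code).

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the error message above; adjust for your environment.
	kubeconfig := "/home/jenkins/minikube-integration/20083-86355/kubeconfig"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}

	name := "old-k8s-version-738445"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not exist (kubeconfig has %d contexts)\n", name, len(cfg.Contexts))
		os.Exit(1)
	}
	fmt.Printf("context %q found\n", name)
}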

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-738445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-738445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m46.227269265s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-738445 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-738445 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-738445 describe deploy/metrics-server -n kube-system: exit status 1 (44.434333ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-738445" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-738445 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 6 (222.422915ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:59:42.687499  142012 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-738445" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (106.49s)
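Editor's note: here the addon enable fails because kubectl inside the guest cannot reach the API server ("The connection to the server localhost:8443 was refused"), so the control plane itself is down rather than the addon being broken. A minimal, hedged health probe against the node's API endpoint is sketched below; the IP and port come from the profile config later in this report, and TLS verification is skipped because this is only a liveness check, not the harness's method.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Node IP and API server port as recorded for this profile (192.168.72.25:8443);
	// a refused connection here matches the "localhost:8443 was refused" error above.
	url := "https://192.168.72.25:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification: this is a quick reachability probe, not a secure client.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %s (%s)\n", resp.Status, body)
}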

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578: exit status 3 (3.16777426s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:59:07.335895  141767 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host
	E1212 00:59:07.335917  141767 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-076578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-076578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153322975s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-076578 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578: exit status 3 (3.062789844s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 00:59:16.551973  141853 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host
	E1212 00:59:16.551991  141853 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-076578" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (730.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-738445 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1212 01:02:46.617646   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:02:55.697752   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:07:46.617628   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:07:55.697633   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-738445 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m7.052694785s)

                                                
                                                
-- stdout --
	* [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-738445" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:59:45.233578  142150 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:59:45.233778  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.233807  142150 out.go:358] Setting ErrFile to fd 2...
	I1212 00:59:45.233824  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.234389  142150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:59:45.235053  142150 out.go:352] Setting JSON to false
	I1212 00:59:45.235948  142150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13327,"bootTime":1733951858,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:59:45.236050  142150 start.go:139] virtualization: kvm guest
	I1212 00:59:45.238284  142150 out.go:177] * [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:59:45.239634  142150 notify.go:220] Checking for updates...
	I1212 00:59:45.239643  142150 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:59:45.240927  142150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:59:45.242159  142150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:59:45.243348  142150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:59:45.244426  142150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:59:45.245620  142150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:59:45.247054  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:59:45.247412  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.247475  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.262410  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1212 00:59:45.262838  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.263420  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.263444  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.263773  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.263944  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.265490  142150 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1212 00:59:45.266656  142150 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:59:45.266925  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.266959  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.281207  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I1212 00:59:45.281596  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.281963  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.281991  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.282333  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.282519  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.316543  142150 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:59:45.317740  142150 start.go:297] selected driver: kvm2
	I1212 00:59:45.317754  142150 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.317960  142150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:59:45.318921  142150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.319030  142150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:59:45.334276  142150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:59:45.334744  142150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:59:45.334784  142150 cni.go:84] Creating CNI manager for ""
	I1212 00:59:45.334845  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:59:45.334901  142150 start.go:340] cluster config:
	{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.335060  142150 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.336873  142150 out.go:177] * Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	I1212 00:59:45.338030  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:59:45.338076  142150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:59:45.338087  142150 cache.go:56] Caching tarball of preloaded images
	I1212 00:59:45.338174  142150 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:59:45.338188  142150 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:59:45.338309  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:59:45.338520  142150 start.go:360] acquireMachinesLock for old-k8s-version-738445: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:03:19.148724  142150 start.go:364] duration metric: took 3m33.810164292s to acquireMachinesLock for "old-k8s-version-738445"
	I1212 01:03:19.148804  142150 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:19.148816  142150 fix.go:54] fixHost starting: 
	I1212 01:03:19.149247  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:19.149331  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:19.167749  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 01:03:19.168331  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:19.168873  142150 main.go:141] libmachine: Using API Version  1
	I1212 01:03:19.168906  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:19.169286  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:19.169500  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:19.169655  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 01:03:19.171285  142150 fix.go:112] recreateIfNeeded on old-k8s-version-738445: state=Stopped err=<nil>
	I1212 01:03:19.171323  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	W1212 01:03:19.171470  142150 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:19.174413  142150 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-738445" ...
	I1212 01:03:19.175763  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .Start
	I1212 01:03:19.175946  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring networks are active...
	I1212 01:03:19.176721  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network default is active
	I1212 01:03:19.177067  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network mk-old-k8s-version-738445 is active
	I1212 01:03:19.177512  142150 main.go:141] libmachine: (old-k8s-version-738445) Getting domain xml...
	I1212 01:03:19.178281  142150 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 01:03:20.457742  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting to get IP...
	I1212 01:03:20.458818  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.459318  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.459384  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.459280  143077 retry.go:31] will retry after 312.060355ms: waiting for machine to come up
	I1212 01:03:20.772778  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.773842  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.773876  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.773802  143077 retry.go:31] will retry after 381.023448ms: waiting for machine to come up
	I1212 01:03:21.156449  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.156985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.157017  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.156943  143077 retry.go:31] will retry after 395.528873ms: waiting for machine to come up
	I1212 01:03:21.554397  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.554873  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.554905  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.554833  143077 retry.go:31] will retry after 542.808989ms: waiting for machine to come up
	I1212 01:03:22.099791  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.100330  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.100360  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.100301  143077 retry.go:31] will retry after 627.111518ms: waiting for machine to come up
	I1212 01:03:22.728727  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.729219  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.729244  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.729167  143077 retry.go:31] will retry after 649.039654ms: waiting for machine to come up
	I1212 01:03:23.379498  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:23.379935  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:23.379968  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:23.379864  143077 retry.go:31] will retry after 1.057286952s: waiting for machine to come up
	I1212 01:03:24.438408  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:24.438821  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:24.438849  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:24.438774  143077 retry.go:31] will retry after 912.755322ms: waiting for machine to come up
	I1212 01:03:25.352682  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:25.353126  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:25.353154  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:25.353073  143077 retry.go:31] will retry after 1.136505266s: waiting for machine to come up
	I1212 01:03:26.491444  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:26.491927  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:26.491955  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:26.491868  143077 retry.go:31] will retry after 1.467959561s: waiting for machine to come up
	I1212 01:03:27.961709  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:27.962220  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:27.962255  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:27.962169  143077 retry.go:31] will retry after 2.70831008s: waiting for machine to come up
	I1212 01:03:30.671930  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:30.672414  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:30.672442  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:30.672366  143077 retry.go:31] will retry after 2.799706675s: waiting for machine to come up
	I1212 01:03:33.474261  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:33.474816  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:33.474851  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:33.474758  143077 retry.go:31] will retry after 4.339389188s: waiting for machine to come up
	I1212 01:03:37.818233  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818777  142150 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 01:03:37.818808  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818818  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 01:03:37.819321  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.819376  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | skip adding static IP to network mk-old-k8s-version-738445 - found existing host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"}
	I1212 01:03:37.819390  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
	I1212 01:03:37.819412  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 01:03:37.819428  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 01:03:37.821654  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822057  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.822084  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822234  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 01:03:37.822265  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 01:03:37.822311  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:37.822325  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 01:03:37.822346  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 01:03:37.951989  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:37.952380  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 01:03:37.953037  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:37.955447  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.955770  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.955801  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.956073  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 01:03:37.956261  142150 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:37.956281  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:37.956490  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:37.958938  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959225  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.959262  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959406  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:37.959569  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959749  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959912  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:37.960101  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:37.960348  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:37.960364  142150 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:38.076202  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:38.076231  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076484  142150 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 01:03:38.076506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076678  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.079316  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079689  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.079717  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.080047  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080178  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080313  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.080481  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.080693  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.080708  142150 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 01:03:38.212896  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 01:03:38.212934  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.215879  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216314  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.216353  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216568  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.216792  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.216980  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.217138  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.217321  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.217556  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.217574  142150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:38.341064  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:38.341103  142150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:38.341148  142150 buildroot.go:174] setting up certificates
	I1212 01:03:38.341167  142150 provision.go:84] configureAuth start
	I1212 01:03:38.341182  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.341471  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:38.343939  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344355  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.344385  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.346597  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.346910  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.346960  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.347103  142150 provision.go:143] copyHostCerts
	I1212 01:03:38.347168  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:38.347188  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:38.347247  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:38.347363  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:38.347373  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:38.347397  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:38.347450  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:38.347457  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:38.347476  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:38.347523  142150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
	I1212 01:03:38.675149  142150 provision.go:177] copyRemoteCerts
	I1212 01:03:38.675217  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:38.675251  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.678239  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678639  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.678677  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.679049  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.679174  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.679294  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:38.770527  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:38.797696  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:38.822454  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 01:03:38.847111  142150 provision.go:87] duration metric: took 505.925391ms to configureAuth
	I1212 01:03:38.847145  142150 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:38.847366  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 01:03:38.847459  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.850243  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850594  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.850621  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850779  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.850981  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851153  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851340  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.851581  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.851786  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.851803  142150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:39.093404  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:39.093440  142150 machine.go:96] duration metric: took 1.137164233s to provisionDockerMachine
	I1212 01:03:39.093457  142150 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 01:03:39.093474  142150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:39.093516  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.093848  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:39.093891  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.096719  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097117  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.097151  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097305  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.097497  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.097650  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.097773  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.186726  142150 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:39.191223  142150 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:39.191249  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:39.191337  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:39.191438  142150 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:39.191557  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:39.201460  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:39.229101  142150 start.go:296] duration metric: took 135.624628ms for postStartSetup
	I1212 01:03:39.229146  142150 fix.go:56] duration metric: took 20.080331642s for fixHost
	I1212 01:03:39.229168  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.231985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232443  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.232479  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232702  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.232913  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233076  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233213  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.233368  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:39.233632  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:39.233649  142150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:39.348721  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965419.319505647
	
	I1212 01:03:39.348749  142150 fix.go:216] guest clock: 1733965419.319505647
	I1212 01:03:39.348761  142150 fix.go:229] Guest: 2024-12-12 01:03:39.319505647 +0000 UTC Remote: 2024-12-12 01:03:39.229149912 +0000 UTC m=+234.032647876 (delta=90.355735ms)
	I1212 01:03:39.348787  142150 fix.go:200] guest clock delta is within tolerance: 90.355735ms
	I1212 01:03:39.348796  142150 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 20.20001796s
	I1212 01:03:39.348829  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.349099  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:39.352088  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352481  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.352510  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352667  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353244  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353428  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353528  142150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:39.353575  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.353645  142150 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:39.353674  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.356260  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356614  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.356644  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356675  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356908  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357112  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.357172  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.357293  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357375  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357438  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.357514  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357652  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357765  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.441961  142150 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:39.478428  142150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:39.631428  142150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:39.637870  142150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:39.637958  142150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:39.655923  142150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:39.655951  142150 start.go:495] detecting cgroup driver to use...
	I1212 01:03:39.656042  142150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:39.676895  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:39.692966  142150 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:39.693048  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:39.710244  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:39.725830  142150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:39.848998  142150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:40.014388  142150 docker.go:233] disabling docker service ...
	I1212 01:03:40.014458  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:40.035579  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:40.052188  142150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:40.184958  142150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:40.332719  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:40.349338  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:40.371164  142150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:03:40.371232  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.382363  142150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:40.382437  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.393175  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.404397  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.417867  142150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:40.432988  142150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:40.447070  142150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:40.447145  142150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:40.460260  142150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:40.472139  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:40.616029  142150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:40.724787  142150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:40.724874  142150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:40.732096  142150 start.go:563] Will wait 60s for crictl version
	I1212 01:03:40.732168  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:40.737266  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:40.790677  142150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:40.790765  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.825617  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.857257  142150 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1212 01:03:40.858851  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:40.861713  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:40.862166  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862355  142150 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:40.866911  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:40.879513  142150 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:40.879655  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 01:03:40.879718  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:40.927436  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:40.927517  142150 ssh_runner.go:195] Run: which lz4
	I1212 01:03:40.932446  142150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:40.937432  142150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:40.937461  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 01:03:42.695407  142150 crio.go:462] duration metric: took 1.763008004s to copy over tarball
	I1212 01:03:42.695494  142150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:45.698009  142150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002470206s)
	I1212 01:03:45.698041  142150 crio.go:469] duration metric: took 3.002598421s to extract the tarball
	I1212 01:03:45.698057  142150 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:45.746245  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:45.783730  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:45.783758  142150 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:03:45.783842  142150 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.783850  142150 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.783909  142150 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.783919  142150 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:45.783965  142150 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.783988  142150 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.783989  142150 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.783935  142150 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.785706  142150 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.785722  142150 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785696  142150 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.785757  142150 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.010563  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.011085  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 01:03:46.072381  142150 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 01:03:46.072424  142150 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.072478  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.113400  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.113431  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.114036  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.114169  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.120739  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.124579  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.124728  142150 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 01:03:46.124754  142150 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 01:03:46.124784  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287160  142150 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 01:03:46.287214  142150 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.287266  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287272  142150 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 01:03:46.287303  142150 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.287353  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294327  142150 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 01:03:46.294369  142150 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.294417  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294420  142150 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 01:03:46.294451  142150 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.294488  142150 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 01:03:46.294501  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294519  142150 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.294547  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.294561  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294640  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.296734  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.297900  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.310329  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.400377  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.400443  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.400478  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.400489  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.426481  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.434403  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.434471  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.568795  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 01:03:46.568915  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.568956  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.569017  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.584299  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.584337  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.608442  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.716715  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.716749  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 01:03:46.727723  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.730180  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 01:03:46.730347  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 01:03:46.744080  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 01:03:46.770152  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 01:03:46.802332  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 01:03:48.053863  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:48.197060  142150 cache_images.go:92] duration metric: took 2.413284252s to LoadCachedImages
	W1212 01:03:48.197176  142150 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1212 01:03:48.197197  142150 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 01:03:48.197352  142150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:48.197443  142150 ssh_runner.go:195] Run: crio config
	I1212 01:03:48.246700  142150 cni.go:84] Creating CNI manager for ""
	I1212 01:03:48.246731  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:48.246743  142150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:48.246771  142150 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 01:03:48.246952  142150 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:48.247031  142150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 01:03:48.257337  142150 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:48.257412  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:48.267272  142150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 01:03:48.284319  142150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:48.301365  142150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1212 01:03:48.321703  142150 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:48.326805  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:48.343523  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:48.476596  142150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:48.497742  142150 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 01:03:48.497830  142150 certs.go:194] generating shared ca certs ...
	I1212 01:03:48.497859  142150 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:48.498094  142150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:48.498160  142150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:48.498177  142150 certs.go:256] generating profile certs ...
	I1212 01:03:48.498311  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 01:03:48.498388  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 01:03:48.498445  142150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 01:03:48.498603  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:48.498651  142150 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:48.498665  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:48.498700  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:48.498732  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:48.498761  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:48.498816  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:48.499418  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:48.546900  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:48.587413  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:48.617873  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:48.645334  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 01:03:48.673348  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:03:48.707990  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:48.748273  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:03:48.785187  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:48.818595  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:48.843735  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:48.871353  142150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:48.893168  142150 ssh_runner.go:195] Run: openssl version
	I1212 01:03:48.902034  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:48.916733  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921766  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921849  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.928169  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:48.939794  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:48.951260  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957920  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957987  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.965772  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:48.977889  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:48.989362  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995796  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995866  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:49.002440  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
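The three "ln -fs" commands above publish each CA into the guest's OpenSSL trust store under its subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is exactly the value "openssl x509 -hash -noout" prints. A minimal Go sketch of that naming scheme, assuming a local openssl binary (minikube itself drives these commands over SSH rather than locally):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hashLinkName returns the "<subject-hash>.0" file name that OpenSSL-based
    // trust stores expect under /etc/ssl/certs for the given PEM certificate.
    // Hypothetical helper for illustration only.
    func hashLinkName(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", fmt.Errorf("openssl x509 -hash: %w", err)
    	}
    	return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
    	name, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	// e.g. prints "b5213941.0", matching the symlink created in the log above.
    	fmt.Println(name)
    }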
	I1212 01:03:49.014144  142150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:49.020570  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:49.027464  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:49.033770  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:49.040087  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:49.046103  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:49.052288  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
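Each "openssl x509 -checkend 86400" run above exits 0 only if the certificate remains valid for at least another 24 hours; a non-zero exit would force the certificate to be regenerated. A small illustrative Go equivalent using crypto/x509 (not minikube's own code, which shells out to openssl on the guest):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring what "openssl x509 -checkend <seconds>" tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }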
	I1212 01:03:49.058638  142150 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:49.058762  142150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:49.058820  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.101711  142150 cri.go:89] found id: ""
	I1212 01:03:49.101800  142150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:49.113377  142150 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:49.113398  142150 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:49.113439  142150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:49.124296  142150 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:49.125851  142150 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:03:49.126876  142150 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-86355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-738445" cluster setting kubeconfig missing "old-k8s-version-738445" context setting]
	I1212 01:03:49.127925  142150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:49.129837  142150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:49.143200  142150 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.25
	I1212 01:03:49.143244  142150 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:49.143262  142150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:49.143339  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.190150  142150 cri.go:89] found id: ""
	I1212 01:03:49.190240  142150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:49.208500  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:49.219194  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:49.219221  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:49.219299  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:49.231345  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:49.231442  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:49.244931  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:49.254646  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:49.254721  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:49.264535  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.273770  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:49.273875  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.284129  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:49.293154  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:49.293221  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
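The grep/rm sequence above keeps an existing kubeconfig file only if it already points at https://control-plane.minikube.internal:8443 and deletes it otherwise, so the kubeadm phases that follow can regenerate it. A simplified Go sketch of that check-and-remove logic (file list and endpoint taken from the log; sudo and error handling condensed):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Keep each file only if it references the expected control-plane endpoint;
    	// otherwise remove it so "kubeadm init phase kubeconfig" recreates it.
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
    			// grep exits non-zero when the endpoint (or the file itself) is missing.
    			os.Remove(f)
    			fmt.Println("removed stale config:", f)
    		}
    	}
    }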
	I1212 01:03:49.302654  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:49.312579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:49.458825  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.328104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.599973  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.749920  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
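The five commands above rebuild the control plane piecewise with individual "kubeadm init phase" steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against /var/tmp/minikube/kubeadm.yaml instead of running a full "kubeadm init". A local, simplified Go sketch of the same phase sequence (minikube wraps each call in sudo with a pinned PATH and runs it over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Phase sequence and config path taken from the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
    			return
    		}
    	}
    	fmt.Println("control plane phases completed")
    }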
	I1212 01:03:50.834972  142150 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:50.835093  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.335779  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.835728  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.335936  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.335817  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.836146  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.335264  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.835917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:55.335677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:55.835164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.335826  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.835888  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.335539  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.835520  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.335630  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.835457  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.835939  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:00.335673  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:00.835254  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.336063  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.835209  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.335874  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.835468  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.335332  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.835312  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.335965  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.835626  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:05.335479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:05.835485  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.335252  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.835837  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.335166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.835880  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.336166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.335533  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.835771  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:10.335255  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:10.835915  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.335375  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.835283  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.335618  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.835897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.335425  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.835757  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.335839  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.836078  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:15.336090  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:15.835274  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.335372  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.835280  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.335431  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.835268  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.335492  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.835414  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.335266  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.835632  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.335276  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.835232  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.335776  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.835983  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.335369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.836160  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.335257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.835348  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.336170  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.835521  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:25.335742  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:25.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.335824  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.836097  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.335807  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.835612  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.335615  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.835140  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.335695  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:30.335304  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:30.835767  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.335536  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.836051  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.336149  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.835257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.335529  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.835959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.336054  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.835955  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:35.335472  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:35.835166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.335337  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.336098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.835686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.335195  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.835464  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.336101  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.836164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:40.336111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:40.835714  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.335249  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.836111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.335205  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.836175  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.335577  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.835336  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.335947  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.835740  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:45.335845  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:45.835169  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.335842  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.835872  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.335682  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.835761  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.336087  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.836134  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:50.335959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
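The block above polls "pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms; after about a minute with no match it falls back to collecting diagnostics. A sketch of that wait loop in Go, assuming a 60-second timeout inferred from the timestamps rather than read from minikube's source:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls pgrep until a kube-apiserver process appears or the
    // timeout elapses. Interval and pgrep pattern come from the log above.
    func waitForAPIServer(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil // pgrep exits 0 once a matching process exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
    	if err := waitForAPIServer(60 * time.Second); err != nil {
    		fmt.Println(err) // in the log above, this is where log gathering begins
    	}
    }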
	I1212 01:04:50.835873  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:50.835996  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:50.878308  142150 cri.go:89] found id: ""
	I1212 01:04:50.878347  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.878360  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:50.878377  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:50.878444  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:50.914645  142150 cri.go:89] found id: ""
	I1212 01:04:50.914673  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.914681  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:50.914687  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:50.914736  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:50.954258  142150 cri.go:89] found id: ""
	I1212 01:04:50.954286  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.954307  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:50.954314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:50.954376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:50.993317  142150 cri.go:89] found id: ""
	I1212 01:04:50.993353  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.993361  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:50.993367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:50.993430  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:51.028521  142150 cri.go:89] found id: ""
	I1212 01:04:51.028551  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.028565  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:51.028572  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:51.028653  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:51.064752  142150 cri.go:89] found id: ""
	I1212 01:04:51.064779  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.064791  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:51.064799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:51.064861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:51.099780  142150 cri.go:89] found id: ""
	I1212 01:04:51.099809  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.099820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:51.099828  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:51.099910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:51.140668  142150 cri.go:89] found id: ""
	I1212 01:04:51.140696  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.140704  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:51.140713  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:51.140747  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.181092  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:51.181123  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:51.239873  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:51.239914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:51.256356  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:51.256383  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:51.391545  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:51.391573  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:51.391602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
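Each diagnostic pass above queries crictl once per control-plane component, then gathers kubelet, dmesg, CRI-O, and "kubectl describe nodes" output; every crictl query returns an empty ID list here because no kube-system containers ever started. A minimal Go sketch of the per-component crictl listing (component names taken from the log; output handling simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Component names match the crictl queries in the log above.
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
    	}
    	for _, name := range components {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d container(s)\n", name, len(ids)) // empty output => 0, as in the log
    	}
    }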
	I1212 01:04:53.965098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:53.981900  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:53.981994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:54.033922  142150 cri.go:89] found id: ""
	I1212 01:04:54.033955  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.033967  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:54.033975  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:54.034038  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:54.084594  142150 cri.go:89] found id: ""
	I1212 01:04:54.084623  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.084634  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:54.084641  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:54.084704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:54.132671  142150 cri.go:89] found id: ""
	I1212 01:04:54.132700  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.132708  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:54.132714  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:54.132768  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:54.169981  142150 cri.go:89] found id: ""
	I1212 01:04:54.170011  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.170019  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:54.170025  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:54.170078  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:54.207708  142150 cri.go:89] found id: ""
	I1212 01:04:54.207737  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.207747  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:54.207753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:54.207812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:54.248150  142150 cri.go:89] found id: ""
	I1212 01:04:54.248176  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.248184  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:54.248191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:54.248240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:54.287792  142150 cri.go:89] found id: ""
	I1212 01:04:54.287820  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.287829  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:54.287835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:54.287892  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:54.322288  142150 cri.go:89] found id: ""
	I1212 01:04:54.322319  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.322330  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:54.322347  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:54.322364  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:54.378947  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:54.378989  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:54.394801  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:54.394845  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:54.473896  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:54.473916  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:54.473929  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:54.558076  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:54.558135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:57.102923  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:57.117418  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:57.117478  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:57.157977  142150 cri.go:89] found id: ""
	I1212 01:04:57.158003  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.158012  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:57.158017  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:57.158074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:57.196388  142150 cri.go:89] found id: ""
	I1212 01:04:57.196417  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.196427  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:57.196432  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:57.196484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:57.238004  142150 cri.go:89] found id: ""
	I1212 01:04:57.238040  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.238048  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:57.238055  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:57.238124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:57.276619  142150 cri.go:89] found id: ""
	I1212 01:04:57.276665  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.276676  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:57.276684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:57.276750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:57.313697  142150 cri.go:89] found id: ""
	I1212 01:04:57.313733  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.313745  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:57.313753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:57.313823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:57.351569  142150 cri.go:89] found id: ""
	I1212 01:04:57.351616  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.351629  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:57.351637  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:57.351705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:57.386726  142150 cri.go:89] found id: ""
	I1212 01:04:57.386758  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.386766  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:57.386772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:57.386821  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:57.421496  142150 cri.go:89] found id: ""
	I1212 01:04:57.421524  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.421533  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:57.421543  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:57.421555  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:57.475374  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:57.475425  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:57.490771  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:57.490813  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:57.562485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:57.562513  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:57.562530  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:57.645022  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:57.645070  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.193526  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:00.209464  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:00.209539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:00.248388  142150 cri.go:89] found id: ""
	I1212 01:05:00.248417  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.248426  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:00.248431  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:00.248480  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:00.284598  142150 cri.go:89] found id: ""
	I1212 01:05:00.284632  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.284642  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:00.284648  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:00.284710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:00.321068  142150 cri.go:89] found id: ""
	I1212 01:05:00.321107  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.321119  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:00.321127  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:00.321189  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:00.358622  142150 cri.go:89] found id: ""
	I1212 01:05:00.358651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.358660  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:00.358666  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:00.358720  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:00.398345  142150 cri.go:89] found id: ""
	I1212 01:05:00.398373  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.398383  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:00.398390  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:00.398442  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:00.437178  142150 cri.go:89] found id: ""
	I1212 01:05:00.437215  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.437227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:00.437235  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:00.437307  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:00.472621  142150 cri.go:89] found id: ""
	I1212 01:05:00.472651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.472662  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:00.472668  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:00.472735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:00.510240  142150 cri.go:89] found id: ""
	I1212 01:05:00.510268  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.510278  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:00.510288  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:00.510301  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:00.596798  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:00.596819  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:00.596830  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:00.673465  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:00.673506  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.716448  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:00.716485  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:00.770265  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:00.770303  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.285159  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:03.299981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:03.300043  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:03.335198  142150 cri.go:89] found id: ""
	I1212 01:05:03.335227  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.335239  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:03.335248  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:03.335319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:03.372624  142150 cri.go:89] found id: ""
	I1212 01:05:03.372651  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.372659  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:03.372665  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:03.372712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:03.408235  142150 cri.go:89] found id: ""
	I1212 01:05:03.408267  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.408279  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:03.408286  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:03.408350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:03.448035  142150 cri.go:89] found id: ""
	I1212 01:05:03.448068  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.448083  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:03.448091  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:03.448144  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:03.488563  142150 cri.go:89] found id: ""
	I1212 01:05:03.488593  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.488602  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:03.488607  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:03.488658  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:03.527858  142150 cri.go:89] found id: ""
	I1212 01:05:03.527886  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.527905  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:03.527913  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:03.527969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:03.564004  142150 cri.go:89] found id: ""
	I1212 01:05:03.564034  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.564044  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:03.564052  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:03.564113  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:03.610648  142150 cri.go:89] found id: ""
	I1212 01:05:03.610679  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.610691  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:03.610702  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:03.610716  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:03.666958  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:03.666996  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.680927  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:03.680961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:03.762843  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:03.762876  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:03.762894  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:03.838434  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:03.838472  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:06.377590  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:06.391770  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:06.391861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:06.430050  142150 cri.go:89] found id: ""
	I1212 01:05:06.430083  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.430096  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:06.430103  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:06.430168  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:06.467980  142150 cri.go:89] found id: ""
	I1212 01:05:06.468014  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.468026  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:06.468033  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:06.468090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:06.505111  142150 cri.go:89] found id: ""
	I1212 01:05:06.505144  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.505156  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:06.505165  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:06.505235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:06.542049  142150 cri.go:89] found id: ""
	I1212 01:05:06.542091  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.542104  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:06.542112  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:06.542175  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:06.576957  142150 cri.go:89] found id: ""
	I1212 01:05:06.576982  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.576991  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:06.576997  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:06.577050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:06.613930  142150 cri.go:89] found id: ""
	I1212 01:05:06.613963  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.613974  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:06.613980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:06.614045  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:06.654407  142150 cri.go:89] found id: ""
	I1212 01:05:06.654441  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.654450  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:06.654455  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:06.654503  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:06.691074  142150 cri.go:89] found id: ""
	I1212 01:05:06.691103  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.691112  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:06.691122  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:06.691133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:06.748638  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:06.748674  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:06.762741  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:06.762772  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:06.833840  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:06.833867  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:06.833885  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:06.914595  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:06.914649  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.461666  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:09.478815  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:09.478889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:09.515975  142150 cri.go:89] found id: ""
	I1212 01:05:09.516007  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.516019  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:09.516042  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:09.516120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:09.556933  142150 cri.go:89] found id: ""
	I1212 01:05:09.556965  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.556977  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:09.556985  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:09.557050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:09.593479  142150 cri.go:89] found id: ""
	I1212 01:05:09.593509  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.593520  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:09.593528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:09.593595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:09.633463  142150 cri.go:89] found id: ""
	I1212 01:05:09.633501  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.633513  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:09.633522  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:09.633583  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:09.666762  142150 cri.go:89] found id: ""
	I1212 01:05:09.666789  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.666798  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:09.666804  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:09.666871  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:09.704172  142150 cri.go:89] found id: ""
	I1212 01:05:09.704206  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.704217  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:09.704228  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:09.704288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:09.749679  142150 cri.go:89] found id: ""
	I1212 01:05:09.749708  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.749717  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:09.749724  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:09.749791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:09.789339  142150 cri.go:89] found id: ""
	I1212 01:05:09.789370  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.789379  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:09.789388  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:09.789399  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:09.875218  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:09.875259  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.918042  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:09.918074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:09.971010  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:09.971052  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:09.985524  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:09.985553  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:10.059280  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:12.560353  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:12.573641  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:12.573719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:12.611903  142150 cri.go:89] found id: ""
	I1212 01:05:12.611931  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.611940  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:12.611947  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:12.612019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:12.647038  142150 cri.go:89] found id: ""
	I1212 01:05:12.647078  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.647090  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:12.647099  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:12.647188  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:12.684078  142150 cri.go:89] found id: ""
	I1212 01:05:12.684111  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.684123  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:12.684132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:12.684194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:12.720094  142150 cri.go:89] found id: ""
	I1212 01:05:12.720125  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.720137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:12.720145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:12.720208  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:12.762457  142150 cri.go:89] found id: ""
	I1212 01:05:12.762492  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.762504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:12.762512  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:12.762564  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:12.798100  142150 cri.go:89] found id: ""
	I1212 01:05:12.798131  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.798139  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:12.798145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:12.798195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:12.832455  142150 cri.go:89] found id: ""
	I1212 01:05:12.832486  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.832494  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:12.832501  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:12.832558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:12.866206  142150 cri.go:89] found id: ""
	I1212 01:05:12.866239  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.866249  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:12.866258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:12.866273  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:12.918512  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:12.918550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:12.932506  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:12.932535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:13.011647  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:13.011670  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:13.011689  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:13.090522  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:13.090565  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:15.634171  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:15.648003  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:15.648067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:15.684747  142150 cri.go:89] found id: ""
	I1212 01:05:15.684780  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.684788  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:15.684795  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:15.684856  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:15.723209  142150 cri.go:89] found id: ""
	I1212 01:05:15.723236  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.723245  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:15.723252  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:15.723299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:15.761473  142150 cri.go:89] found id: ""
	I1212 01:05:15.761504  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.761513  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:15.761519  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:15.761588  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:15.795637  142150 cri.go:89] found id: ""
	I1212 01:05:15.795668  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.795677  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:15.795685  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:15.795735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:15.835576  142150 cri.go:89] found id: ""
	I1212 01:05:15.835616  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.835628  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:15.835636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:15.835690  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:15.877331  142150 cri.go:89] found id: ""
	I1212 01:05:15.877359  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.877370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:15.877379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:15.877440  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:15.914225  142150 cri.go:89] found id: ""
	I1212 01:05:15.914255  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.914265  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:15.914271  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:15.914323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:15.949819  142150 cri.go:89] found id: ""
	I1212 01:05:15.949845  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.949853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:15.949862  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:15.949877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:16.029950  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:16.029991  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:16.071065  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:16.071094  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:16.126731  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:16.126786  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:16.140774  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:16.140807  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:16.210269  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:18.710498  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:18.725380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:18.725462  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:18.762409  142150 cri.go:89] found id: ""
	I1212 01:05:18.762438  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.762446  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:18.762453  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:18.762501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:18.800308  142150 cri.go:89] found id: ""
	I1212 01:05:18.800336  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.800344  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:18.800351  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:18.800419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:18.834918  142150 cri.go:89] found id: ""
	I1212 01:05:18.834947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.834955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:18.834962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:18.835012  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:18.872434  142150 cri.go:89] found id: ""
	I1212 01:05:18.872470  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.872481  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:18.872490  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:18.872551  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:18.906919  142150 cri.go:89] found id: ""
	I1212 01:05:18.906947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.906955  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:18.906962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:18.907011  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:18.944626  142150 cri.go:89] found id: ""
	I1212 01:05:18.944661  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.944671  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:18.944677  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:18.944728  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:18.981196  142150 cri.go:89] found id: ""
	I1212 01:05:18.981224  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.981233  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:18.981239  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:18.981290  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:19.017640  142150 cri.go:89] found id: ""
	I1212 01:05:19.017669  142150 logs.go:282] 0 containers: []
	W1212 01:05:19.017679  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:19.017691  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:19.017728  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:19.089551  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:19.089582  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:19.089602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:19.176914  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:19.176958  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:19.223652  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:19.223694  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:19.281292  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:19.281353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:21.797351  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:21.811040  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:21.811120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:21.847213  142150 cri.go:89] found id: ""
	I1212 01:05:21.847242  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.847253  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:21.847261  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:21.847323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:21.883925  142150 cri.go:89] found id: ""
	I1212 01:05:21.883952  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.883961  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:21.883967  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:21.884029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:21.925919  142150 cri.go:89] found id: ""
	I1212 01:05:21.925946  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.925955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:21.925961  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:21.926025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:21.963672  142150 cri.go:89] found id: ""
	I1212 01:05:21.963708  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.963719  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:21.963728  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:21.963794  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:22.000058  142150 cri.go:89] found id: ""
	I1212 01:05:22.000086  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.000094  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:22.000100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:22.000153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:22.036262  142150 cri.go:89] found id: ""
	I1212 01:05:22.036294  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.036305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:22.036314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:22.036381  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:22.072312  142150 cri.go:89] found id: ""
	I1212 01:05:22.072348  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.072361  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:22.072369  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:22.072428  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:22.109376  142150 cri.go:89] found id: ""
	I1212 01:05:22.109406  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.109413  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:22.109422  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:22.109436  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:22.183975  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:22.184006  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:22.184024  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:22.262037  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:22.262076  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:22.306902  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:22.306934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:22.361922  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:22.361964  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:24.877203  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:24.891749  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:24.891822  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:24.926934  142150 cri.go:89] found id: ""
	I1212 01:05:24.926974  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.926987  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:24.926997  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:24.927061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:24.961756  142150 cri.go:89] found id: ""
	I1212 01:05:24.961791  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.961803  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:24.961812  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:24.961872  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:25.001414  142150 cri.go:89] found id: ""
	I1212 01:05:25.001449  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.001462  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:25.001470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:25.001536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:25.038398  142150 cri.go:89] found id: ""
	I1212 01:05:25.038429  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.038438  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:25.038443  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:25.038499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:25.074146  142150 cri.go:89] found id: ""
	I1212 01:05:25.074175  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.074184  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:25.074191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:25.074266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:25.112259  142150 cri.go:89] found id: ""
	I1212 01:05:25.112287  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.112295  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:25.112303  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:25.112366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:25.148819  142150 cri.go:89] found id: ""
	I1212 01:05:25.148846  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.148853  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:25.148859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:25.148916  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:25.191229  142150 cri.go:89] found id: ""
	I1212 01:05:25.191262  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.191274  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:25.191286  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:25.191298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:25.280584  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:25.280641  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:25.325436  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:25.325473  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:25.380358  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:25.380406  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:25.394854  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:25.394889  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:25.474359  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:27.975286  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:27.989833  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:27.989893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:28.027211  142150 cri.go:89] found id: ""
	I1212 01:05:28.027242  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.027254  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:28.027262  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:28.027319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:28.063115  142150 cri.go:89] found id: ""
	I1212 01:05:28.063147  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.063158  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:28.063165  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:28.063226  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:28.121959  142150 cri.go:89] found id: ""
	I1212 01:05:28.121993  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.122006  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:28.122014  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:28.122074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:28.161636  142150 cri.go:89] found id: ""
	I1212 01:05:28.161666  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.161674  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:28.161680  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:28.161745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:28.197581  142150 cri.go:89] found id: ""
	I1212 01:05:28.197615  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.197627  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:28.197636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:28.197704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:28.234811  142150 cri.go:89] found id: ""
	I1212 01:05:28.234839  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.234849  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:28.234857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:28.234914  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:28.275485  142150 cri.go:89] found id: ""
	I1212 01:05:28.275510  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.275518  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:28.275524  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:28.275570  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:28.311514  142150 cri.go:89] found id: ""
	I1212 01:05:28.311551  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.311562  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:28.311574  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:28.311608  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:28.362113  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:28.362153  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:28.376321  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:28.376353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:28.460365  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:28.460394  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:28.460412  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:28.545655  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:28.545697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:31.088684  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:31.103954  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:31.104033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:31.143436  142150 cri.go:89] found id: ""
	I1212 01:05:31.143468  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.143478  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:31.143488  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:31.143541  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:31.181127  142150 cri.go:89] found id: ""
	I1212 01:05:31.181162  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.181173  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:31.181181  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:31.181246  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:31.217764  142150 cri.go:89] found id: ""
	I1212 01:05:31.217794  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.217805  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:31.217812  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:31.217882  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:31.253648  142150 cri.go:89] found id: ""
	I1212 01:05:31.253674  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.253683  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:31.253690  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:31.253745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:31.292365  142150 cri.go:89] found id: ""
	I1212 01:05:31.292393  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.292401  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:31.292407  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:31.292455  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:31.329834  142150 cri.go:89] found id: ""
	I1212 01:05:31.329866  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.329876  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:31.329883  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:31.329934  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:31.368679  142150 cri.go:89] found id: ""
	I1212 01:05:31.368712  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.368720  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:31.368726  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:31.368784  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:31.409003  142150 cri.go:89] found id: ""
	I1212 01:05:31.409028  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.409036  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:31.409053  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:31.409068  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:31.462888  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:31.462927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:31.477975  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:31.478011  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:31.545620  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:31.545648  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:31.545665  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:31.626530  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:31.626570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.167917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:34.183293  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:34.183372  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:34.219167  142150 cri.go:89] found id: ""
	I1212 01:05:34.219191  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.219200  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:34.219206  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:34.219265  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:34.254552  142150 cri.go:89] found id: ""
	I1212 01:05:34.254580  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.254588  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:34.254594  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:34.254645  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:34.289933  142150 cri.go:89] found id: ""
	I1212 01:05:34.289960  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.289969  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:34.289975  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:34.290027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:34.325468  142150 cri.go:89] found id: ""
	I1212 01:05:34.325497  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.325505  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:34.325510  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:34.325558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:34.364154  142150 cri.go:89] found id: ""
	I1212 01:05:34.364185  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.364197  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:34.364205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:34.364256  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:34.400516  142150 cri.go:89] found id: ""
	I1212 01:05:34.400546  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.400554  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:34.400559  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:34.400621  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:34.437578  142150 cri.go:89] found id: ""
	I1212 01:05:34.437608  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.437616  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:34.437622  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:34.437687  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:34.472061  142150 cri.go:89] found id: ""
	I1212 01:05:34.472094  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.472105  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:34.472117  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:34.472135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.526286  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:34.526340  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:34.610616  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:34.610664  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:34.625098  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:34.625130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:34.699706  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:34.699736  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:34.699759  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:37.282716  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:37.299415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:37.299486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:37.337783  142150 cri.go:89] found id: ""
	I1212 01:05:37.337820  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.337833  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:37.337842  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:37.337910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:37.375491  142150 cri.go:89] found id: ""
	I1212 01:05:37.375526  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.375539  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:37.375547  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:37.375637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:37.417980  142150 cri.go:89] found id: ""
	I1212 01:05:37.418016  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.418028  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:37.418037  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:37.418115  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:37.454902  142150 cri.go:89] found id: ""
	I1212 01:05:37.454936  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.454947  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:37.454956  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:37.455029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:37.492144  142150 cri.go:89] found id: ""
	I1212 01:05:37.492175  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.492188  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:37.492196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:37.492266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:37.531054  142150 cri.go:89] found id: ""
	I1212 01:05:37.531085  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.531094  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:37.531100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:37.531161  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:37.565127  142150 cri.go:89] found id: ""
	I1212 01:05:37.565169  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.565191  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:37.565209  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:37.565269  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:37.601233  142150 cri.go:89] found id: ""
	I1212 01:05:37.601273  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.601286  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:37.601300  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:37.601315  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:37.652133  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:37.652172  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:37.666974  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:37.667007  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:37.744500  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:37.744527  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:37.744544  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:37.825572  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:37.825611  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:40.366883  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:40.380597  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:40.380662  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:40.417588  142150 cri.go:89] found id: ""
	I1212 01:05:40.417614  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.417623  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:40.417629  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:40.417681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:40.452506  142150 cri.go:89] found id: ""
	I1212 01:05:40.452535  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.452547  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:40.452555  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:40.452620  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:40.496623  142150 cri.go:89] found id: ""
	I1212 01:05:40.496657  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.496669  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:40.496681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:40.496755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:40.534202  142150 cri.go:89] found id: ""
	I1212 01:05:40.534241  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.534266  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:40.534277  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:40.534337  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:40.580317  142150 cri.go:89] found id: ""
	I1212 01:05:40.580346  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.580359  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:40.580367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:40.580437  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:40.616814  142150 cri.go:89] found id: ""
	I1212 01:05:40.616842  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.616850  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:40.616857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:40.616909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:40.653553  142150 cri.go:89] found id: ""
	I1212 01:05:40.653584  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.653593  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:40.653603  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:40.653667  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:40.687817  142150 cri.go:89] found id: ""
	I1212 01:05:40.687843  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.687852  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:40.687862  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:40.687872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:40.739304  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:40.739343  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:40.753042  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:40.753074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:40.820091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:40.820112  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:40.820126  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:40.903503  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:40.903561  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.446157  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:43.461289  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:43.461365  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:43.503352  142150 cri.go:89] found id: ""
	I1212 01:05:43.503385  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.503394  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:43.503402  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:43.503466  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:43.541576  142150 cri.go:89] found id: ""
	I1212 01:05:43.541610  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.541619  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:43.541626  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:43.541683  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:43.581255  142150 cri.go:89] found id: ""
	I1212 01:05:43.581285  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.581298  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:43.581305  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:43.581384  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:43.622081  142150 cri.go:89] found id: ""
	I1212 01:05:43.622114  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.622126  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:43.622135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:43.622201  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:43.657001  142150 cri.go:89] found id: ""
	I1212 01:05:43.657032  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.657041  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:43.657048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:43.657114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:43.691333  142150 cri.go:89] found id: ""
	I1212 01:05:43.691362  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.691370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:43.691376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:43.691425  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:43.728745  142150 cri.go:89] found id: ""
	I1212 01:05:43.728779  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.728791  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:43.728799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:43.728864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:43.764196  142150 cri.go:89] found id: ""
	I1212 01:05:43.764229  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.764241  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:43.764253  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:43.764268  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.804433  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:43.804469  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:43.858783  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:43.858822  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:43.873582  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:43.873610  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:43.949922  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:43.949945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:43.949962  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
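
The cycle above checks each control-plane component by listing CRI containers with a name filter and treating empty output as "No container was found". A minimal sketch of that lookup pattern is below; it runs crictl locally via os/exec for illustration, whereas the trace runs the same command through minikube's ssh_runner, and the component list is simply copied from the log lines.

// Sketch only (not minikube's code): for each component, run
// `sudo crictl ps -a --quiet --name=<component>` and treat empty
// output as "no container found", mirroring the cri.go lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// With --quiet, crictl prints one container ID per line; no output
		// means no container (running or exited) matches the name filter.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s): %v\n", name, len(ids), ids)
	}
}
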
	I1212 01:05:46.531390  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:46.546806  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:46.546881  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:46.583062  142150 cri.go:89] found id: ""
	I1212 01:05:46.583103  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.583116  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:46.583124  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:46.583187  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:46.621483  142150 cri.go:89] found id: ""
	I1212 01:05:46.621513  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.621524  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:46.621532  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:46.621595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:46.658400  142150 cri.go:89] found id: ""
	I1212 01:05:46.658431  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.658440  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:46.658450  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:46.658520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:46.694368  142150 cri.go:89] found id: ""
	I1212 01:05:46.694393  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.694407  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:46.694413  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:46.694469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:46.733456  142150 cri.go:89] found id: ""
	I1212 01:05:46.733492  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.733504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:46.733513  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:46.733574  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:46.767206  142150 cri.go:89] found id: ""
	I1212 01:05:46.767236  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.767248  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:46.767255  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:46.767317  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:46.803520  142150 cri.go:89] found id: ""
	I1212 01:05:46.803554  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.803564  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:46.803575  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:46.803657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:46.849563  142150 cri.go:89] found id: ""
	I1212 01:05:46.849590  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.849597  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:46.849606  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:46.849618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:46.862800  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:46.862831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:46.931858  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:46.931883  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:46.931896  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:47.009125  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:47.009167  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.050830  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:47.050858  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.604639  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:49.618087  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:49.618153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:49.653674  142150 cri.go:89] found id: ""
	I1212 01:05:49.653703  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.653712  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:49.653718  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:49.653772  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:49.688391  142150 cri.go:89] found id: ""
	I1212 01:05:49.688428  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.688439  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:49.688446  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:49.688516  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:49.729378  142150 cri.go:89] found id: ""
	I1212 01:05:49.729412  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.729423  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:49.729432  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:49.729492  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:49.765171  142150 cri.go:89] found id: ""
	I1212 01:05:49.765198  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.765206  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:49.765213  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:49.765260  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:49.800980  142150 cri.go:89] found id: ""
	I1212 01:05:49.801018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.801027  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:49.801034  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:49.801086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:49.836122  142150 cri.go:89] found id: ""
	I1212 01:05:49.836149  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.836161  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:49.836169  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:49.836235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:49.873978  142150 cri.go:89] found id: ""
	I1212 01:05:49.874018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.874027  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:49.874032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:49.874086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:49.909709  142150 cri.go:89] found id: ""
	I1212 01:05:49.909741  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.909754  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:49.909766  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:49.909783  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.963352  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:49.963394  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:49.977813  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:49.977841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:50.054423  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:50.054452  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:50.054470  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:50.133375  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:50.133416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:52.673427  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:52.687196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:52.687259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:52.725001  142150 cri.go:89] found id: ""
	I1212 01:05:52.725031  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.725039  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:52.725045  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:52.725110  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:52.760885  142150 cri.go:89] found id: ""
	I1212 01:05:52.760923  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.760934  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:52.760941  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:52.761025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:52.798583  142150 cri.go:89] found id: ""
	I1212 01:05:52.798615  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.798627  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:52.798635  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:52.798700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:52.835957  142150 cri.go:89] found id: ""
	I1212 01:05:52.835983  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.835991  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:52.835998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:52.836065  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:52.876249  142150 cri.go:89] found id: ""
	I1212 01:05:52.876281  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.876292  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:52.876299  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:52.876397  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:52.911667  142150 cri.go:89] found id: ""
	I1212 01:05:52.911700  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.911712  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:52.911720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:52.911796  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:52.946781  142150 cri.go:89] found id: ""
	I1212 01:05:52.946808  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.946820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:52.946827  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:52.946889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:52.985712  142150 cri.go:89] found id: ""
	I1212 01:05:52.985740  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.985752  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:52.985762  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:52.985778  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:53.038522  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:53.038563  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:53.052336  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:53.052382  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:53.132247  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:53.132280  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:53.132297  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:53.208823  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:53.208851  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:55.747479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:55.760703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:55.760765  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:55.797684  142150 cri.go:89] found id: ""
	I1212 01:05:55.797720  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.797732  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:55.797740  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:55.797807  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:55.840900  142150 cri.go:89] found id: ""
	I1212 01:05:55.840933  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.840944  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:55.840953  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:55.841033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:55.879098  142150 cri.go:89] found id: ""
	I1212 01:05:55.879131  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.879144  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:55.879152  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:55.879217  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:55.914137  142150 cri.go:89] found id: ""
	I1212 01:05:55.914166  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.914174  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:55.914181  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:55.914238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:55.950608  142150 cri.go:89] found id: ""
	I1212 01:05:55.950635  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.950644  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:55.950654  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:55.950705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:55.992162  142150 cri.go:89] found id: ""
	I1212 01:05:55.992187  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.992196  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:55.992202  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:55.992254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:56.028071  142150 cri.go:89] found id: ""
	I1212 01:05:56.028097  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.028105  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:56.028111  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:56.028164  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:56.063789  142150 cri.go:89] found id: ""
	I1212 01:05:56.063814  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.063822  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:56.063832  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:56.063844  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:56.118057  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:56.118096  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.132908  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:56.132939  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:56.200923  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:56.200951  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:56.200971  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:56.283272  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:56.283321  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:58.825548  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:58.839298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:58.839368  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:58.874249  142150 cri.go:89] found id: ""
	I1212 01:05:58.874289  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.874301  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:58.874313  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:58.874391  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:58.909238  142150 cri.go:89] found id: ""
	I1212 01:05:58.909273  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.909286  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:58.909294  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:58.909359  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:58.945112  142150 cri.go:89] found id: ""
	I1212 01:05:58.945139  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.945146  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:58.945154  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:58.945203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:58.981101  142150 cri.go:89] found id: ""
	I1212 01:05:58.981153  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.981168  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:58.981176  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:58.981241  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:59.015095  142150 cri.go:89] found id: ""
	I1212 01:05:59.015135  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.015147  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:59.015158  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:59.015224  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:59.051606  142150 cri.go:89] found id: ""
	I1212 01:05:59.051640  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.051650  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:59.051659  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:59.051719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:59.088125  142150 cri.go:89] found id: ""
	I1212 01:05:59.088153  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.088161  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:59.088166  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:59.088223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:59.127803  142150 cri.go:89] found id: ""
	I1212 01:05:59.127829  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.127841  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:59.127853  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:59.127871  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:59.204831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:59.204857  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:59.204872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:59.285346  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:59.285387  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:59.324194  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:59.324233  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:59.378970  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:59.379022  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
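
When no containers are found, the runner falls back to gathering host-level logs: kubelet and CRI-O via journalctl, dmesg, "describe nodes" through the bundled kubectl, and raw container status. The sketch below replays those exact shell commands (copied from the trace, including the kubectl path and kubeconfig location) with a local `bash -c` instead of the ssh_runner; it is illustrative, not the real implementation.

// Illustrative replay of the log-gathering commands recorded above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Printf(">>> Gathering logs for %s ...\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// With the apiserver down, "describe nodes" fails exactly as in the
			// trace: the connection to localhost:8443 is refused.
			fmt.Printf("command failed: %v\n", err)
		}
		fmt.Print(string(out))
	}
}
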
	I1212 01:06:01.893635  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:01.907481  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:01.907606  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:01.949985  142150 cri.go:89] found id: ""
	I1212 01:06:01.950022  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.950035  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:01.950043  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:01.950112  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:01.986884  142150 cri.go:89] found id: ""
	I1212 01:06:01.986914  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.986923  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:01.986928  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:01.986994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:02.025010  142150 cri.go:89] found id: ""
	I1212 01:06:02.025044  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.025056  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:02.025063  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:02.025137  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:02.061300  142150 cri.go:89] found id: ""
	I1212 01:06:02.061340  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.061352  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:02.061361  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:02.061427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:02.098627  142150 cri.go:89] found id: ""
	I1212 01:06:02.098667  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.098677  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:02.098684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:02.098744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:02.137005  142150 cri.go:89] found id: ""
	I1212 01:06:02.137030  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.137038  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:02.137044  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:02.137104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:02.172052  142150 cri.go:89] found id: ""
	I1212 01:06:02.172086  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.172096  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:02.172102  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:02.172154  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:02.207721  142150 cri.go:89] found id: ""
	I1212 01:06:02.207750  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.207761  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:02.207771  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:02.207787  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:02.221576  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:02.221605  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:02.291780  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:02.291812  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:02.291826  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:02.376553  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:02.376595  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:02.418407  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:02.418446  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:04.973347  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:04.988470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:04.988545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:05.024045  142150 cri.go:89] found id: ""
	I1212 01:06:05.024076  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.024085  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:05.024092  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:05.024149  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:05.060055  142150 cri.go:89] found id: ""
	I1212 01:06:05.060079  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.060089  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:05.060095  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:05.060145  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:05.097115  142150 cri.go:89] found id: ""
	I1212 01:06:05.097142  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.097152  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:05.097160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:05.097220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:05.133941  142150 cri.go:89] found id: ""
	I1212 01:06:05.133976  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.133990  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:05.133998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:05.134063  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:05.169157  142150 cri.go:89] found id: ""
	I1212 01:06:05.169185  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.169193  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:05.169200  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:05.169253  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:05.206434  142150 cri.go:89] found id: ""
	I1212 01:06:05.206464  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.206475  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:05.206484  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:05.206546  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:05.248363  142150 cri.go:89] found id: ""
	I1212 01:06:05.248397  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.248409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:05.248417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:05.248485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:05.284898  142150 cri.go:89] found id: ""
	I1212 01:06:05.284932  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.284945  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:05.284958  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:05.284974  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:05.362418  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:05.362445  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:05.362464  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:05.446289  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:05.446349  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:05.487075  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:05.487107  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:05.542538  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:05.542582  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.057586  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:08.070959  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:08.071019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:08.109906  142150 cri.go:89] found id: ""
	I1212 01:06:08.109936  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.109945  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:08.109951  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:08.110005  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:08.145130  142150 cri.go:89] found id: ""
	I1212 01:06:08.145159  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.145168  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:08.145175  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:08.145223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:08.183454  142150 cri.go:89] found id: ""
	I1212 01:06:08.183485  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.183496  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:08.183504  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:08.183573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:08.218728  142150 cri.go:89] found id: ""
	I1212 01:06:08.218752  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.218763  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:08.218772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:08.218835  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:08.256230  142150 cri.go:89] found id: ""
	I1212 01:06:08.256263  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.256274  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:08.256283  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:08.256345  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:08.294179  142150 cri.go:89] found id: ""
	I1212 01:06:08.294209  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.294221  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:08.294229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:08.294293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:08.335793  142150 cri.go:89] found id: ""
	I1212 01:06:08.335822  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.335835  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:08.335843  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:08.335905  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:08.387704  142150 cri.go:89] found id: ""
	I1212 01:06:08.387734  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.387746  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:08.387757  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:08.387773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:08.465260  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:08.465307  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:08.508088  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:08.508129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:08.558617  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:08.558655  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.573461  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:08.573489  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:08.649664  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
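
The timestamps show the whole sequence repeating roughly every three seconds: check for a kube-apiserver process with pgrep, and when it is absent, re-list containers and re-collect logs. A hedged sketch of that wait loop follows; the 5-minute deadline is an assumption for illustration only, since the real timeout is not visible in this trace.

// Sketch of the polling pattern suggested by the timestamps above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// Pattern copied from the trace: match the full command line
		// against "kube-apiserver.*minikube.*".
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		// Not running yet: this is where the trace re-lists CRI containers
		// and gathers kubelet/dmesg/CRI-O logs before the next attempt.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
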
	I1212 01:06:11.150614  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:11.164991  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:11.165062  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:11.201977  142150 cri.go:89] found id: ""
	I1212 01:06:11.202011  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.202045  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:11.202055  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:11.202124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:11.243638  142150 cri.go:89] found id: ""
	I1212 01:06:11.243667  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.243676  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:11.243682  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:11.243742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:11.279577  142150 cri.go:89] found id: ""
	I1212 01:06:11.279621  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.279634  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:11.279642  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:11.279709  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:11.317344  142150 cri.go:89] found id: ""
	I1212 01:06:11.317378  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.317386  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:11.317392  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:11.317457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:11.358331  142150 cri.go:89] found id: ""
	I1212 01:06:11.358361  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.358373  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:11.358381  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:11.358439  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:11.393884  142150 cri.go:89] found id: ""
	I1212 01:06:11.393911  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.393919  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:11.393926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:11.393974  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:11.433243  142150 cri.go:89] found id: ""
	I1212 01:06:11.433290  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.433302  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:11.433310  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:11.433374  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:11.478597  142150 cri.go:89] found id: ""
	I1212 01:06:11.478625  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.478637  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:11.478650  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:11.478667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:11.528096  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:11.528133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:11.542118  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:11.542149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:11.612414  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:11.612435  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:11.612451  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:11.689350  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:11.689389  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.230677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:14.245866  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:14.245970  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:14.283451  142150 cri.go:89] found id: ""
	I1212 01:06:14.283487  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.283495  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:14.283502  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:14.283552  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:14.318812  142150 cri.go:89] found id: ""
	I1212 01:06:14.318840  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.318848  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:14.318855  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:14.318904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:14.356489  142150 cri.go:89] found id: ""
	I1212 01:06:14.356519  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.356527  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:14.356533  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:14.356590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:14.394224  142150 cri.go:89] found id: ""
	I1212 01:06:14.394260  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.394271  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:14.394279  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:14.394350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:14.432440  142150 cri.go:89] found id: ""
	I1212 01:06:14.432467  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.432480  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:14.432488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:14.432540  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:14.469777  142150 cri.go:89] found id: ""
	I1212 01:06:14.469822  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.469835  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:14.469844  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:14.469904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:14.504830  142150 cri.go:89] found id: ""
	I1212 01:06:14.504860  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.504872  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:14.504881  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:14.504941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:14.539399  142150 cri.go:89] found id: ""
	I1212 01:06:14.539423  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.539432  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:14.539441  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:14.539454  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:14.552716  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:14.552749  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:14.628921  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:14.628945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:14.628959  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:14.707219  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:14.707255  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.765953  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:14.765986  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:17.324233  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:17.337428  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:17.337499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:17.374493  142150 cri.go:89] found id: ""
	I1212 01:06:17.374526  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.374538  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:17.374547  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:17.374616  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:17.408494  142150 cri.go:89] found id: ""
	I1212 01:06:17.408519  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.408527  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:17.408535  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:17.408582  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:17.452362  142150 cri.go:89] found id: ""
	I1212 01:06:17.452389  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.452397  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:17.452403  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:17.452456  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:17.493923  142150 cri.go:89] found id: ""
	I1212 01:06:17.493957  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.493968  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:17.493976  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:17.494037  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:17.529519  142150 cri.go:89] found id: ""
	I1212 01:06:17.529548  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.529556  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:17.529562  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:17.529610  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:17.570272  142150 cri.go:89] found id: ""
	I1212 01:06:17.570297  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.570305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:17.570312  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:17.570361  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:17.609326  142150 cri.go:89] found id: ""
	I1212 01:06:17.609360  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.609371  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:17.609379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:17.609470  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:17.642814  142150 cri.go:89] found id: ""
	I1212 01:06:17.642844  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.642853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:17.642863  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:17.642875  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:17.656476  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:17.656510  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:17.726997  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:17.727024  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:17.727039  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:17.803377  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:17.803424  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:17.851190  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:17.851222  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
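The cycle above probes each expected control-plane component by asking CRI-O for any container whose name matches, and every probe returns no IDs. For reference, that probe sequence can be reproduced with the short standalone sketch below. This is an illustrative Go program, not minikube's own cri.go/logs.go code; it only reruns the exact commands the log records ssh_runner executing and reports which components have no containers.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken verbatim from the log lines above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same command the log shows being run on the node.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: found %d container(s)\n", name, len(ids))
	}
}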
	I1212 01:06:20.406953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:20.420410  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:20.420484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:20.462696  142150 cri.go:89] found id: ""
	I1212 01:06:20.462733  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.462744  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:20.462752  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:20.462815  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:20.522881  142150 cri.go:89] found id: ""
	I1212 01:06:20.522906  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.522915  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:20.522921  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:20.522979  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:20.575876  142150 cri.go:89] found id: ""
	I1212 01:06:20.575917  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.575928  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:20.575936  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:20.576003  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:20.627875  142150 cri.go:89] found id: ""
	I1212 01:06:20.627907  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.627919  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:20.627926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:20.627976  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:20.668323  142150 cri.go:89] found id: ""
	I1212 01:06:20.668353  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.668365  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:20.668372  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:20.668441  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:20.705907  142150 cri.go:89] found id: ""
	I1212 01:06:20.705942  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.705954  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:20.705963  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:20.706023  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:20.740221  142150 cri.go:89] found id: ""
	I1212 01:06:20.740249  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.740257  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:20.740263  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:20.740328  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:20.780346  142150 cri.go:89] found id: ""
	I1212 01:06:20.780372  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.780380  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:20.780390  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:20.780407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:20.837660  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:20.837699  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:20.852743  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:20.852775  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:20.928353  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:20.928385  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:20.928401  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:21.009919  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:21.009961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
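Because none of the components is running, each cycle then falls back to gathering node-level diagnostics: the kubelet and CRI-O journals, dmesg, a "describe nodes" attempt via the bundled v1.20.0 kubectl (which exits with status 1 while nothing is listening on localhost:8443), and container status. A minimal sketch of that gathering step, in the same language as the sketch above and using only the command strings copied from the log lines, is shown here; it is not minikube's actual logs.go implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the "Gathering logs for ..." lines above.
	steps := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// With no apiserver on localhost:8443, the describe-nodes step
			// fails with "connection ... refused", matching the log above.
			fmt.Printf("%s failed: %v\n", s.name, err)
		}
		fmt.Printf("=== %s ===\n%s\n", s.name, out)
	}
}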
	I1212 01:06:23.553897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:23.568667  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:23.568742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:23.607841  142150 cri.go:89] found id: ""
	I1212 01:06:23.607873  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.607884  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:23.607891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:23.607945  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:23.645461  142150 cri.go:89] found id: ""
	I1212 01:06:23.645494  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.645505  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:23.645513  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:23.645578  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:23.681140  142150 cri.go:89] found id: ""
	I1212 01:06:23.681165  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.681174  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:23.681180  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:23.681230  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:23.718480  142150 cri.go:89] found id: ""
	I1212 01:06:23.718515  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.718526  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:23.718534  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:23.718602  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:23.760206  142150 cri.go:89] found id: ""
	I1212 01:06:23.760235  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.760243  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:23.760249  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:23.760302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:23.797384  142150 cri.go:89] found id: ""
	I1212 01:06:23.797417  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.797431  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:23.797439  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:23.797496  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:23.830608  142150 cri.go:89] found id: ""
	I1212 01:06:23.830639  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.830650  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:23.830658  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:23.830722  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:23.867481  142150 cri.go:89] found id: ""
	I1212 01:06:23.867509  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.867522  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:23.867534  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:23.867551  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:23.922529  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:23.922579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:23.936763  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:23.936794  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:24.004371  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:24.004398  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:24.004413  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:24.083097  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:24.083136  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:26.633394  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:26.646898  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:26.646977  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:26.680382  142150 cri.go:89] found id: ""
	I1212 01:06:26.680411  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.680421  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:26.680427  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:26.680475  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:26.716948  142150 cri.go:89] found id: ""
	I1212 01:06:26.716982  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.716994  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:26.717001  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:26.717090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:26.753141  142150 cri.go:89] found id: ""
	I1212 01:06:26.753168  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.753176  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:26.753182  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:26.753231  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:26.791025  142150 cri.go:89] found id: ""
	I1212 01:06:26.791056  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.791068  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:26.791074  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:26.791130  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:26.829914  142150 cri.go:89] found id: ""
	I1212 01:06:26.829952  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.829965  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:26.829973  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:26.830046  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:26.865990  142150 cri.go:89] found id: ""
	I1212 01:06:26.866022  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.866045  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:26.866053  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:26.866133  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:26.906007  142150 cri.go:89] found id: ""
	I1212 01:06:26.906040  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.906052  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:26.906060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:26.906141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:26.946004  142150 cri.go:89] found id: ""
	I1212 01:06:26.946038  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.946048  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:26.946057  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:26.946073  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:27.018967  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:27.018996  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:27.019013  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:27.100294  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:27.100334  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:27.141147  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:27.141190  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:27.193161  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:27.193200  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:29.709616  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:29.723336  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:29.723413  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:29.769938  142150 cri.go:89] found id: ""
	I1212 01:06:29.769966  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.769977  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:29.769985  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:29.770048  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:29.809109  142150 cri.go:89] found id: ""
	I1212 01:06:29.809147  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.809160  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:29.809168  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:29.809229  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:29.845444  142150 cri.go:89] found id: ""
	I1212 01:06:29.845471  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.845481  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:29.845488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:29.845548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:29.882109  142150 cri.go:89] found id: ""
	I1212 01:06:29.882138  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.882147  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:29.882153  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:29.882203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:29.928731  142150 cri.go:89] found id: ""
	I1212 01:06:29.928764  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.928777  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:29.928785  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:29.928849  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:29.972994  142150 cri.go:89] found id: ""
	I1212 01:06:29.973026  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.973041  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:29.973048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:29.973098  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:30.009316  142150 cri.go:89] found id: ""
	I1212 01:06:30.009349  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.009357  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:30.009363  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:30.009422  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:30.043082  142150 cri.go:89] found id: ""
	I1212 01:06:30.043111  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.043122  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:30.043134  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:30.043149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:30.097831  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:30.097866  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:30.112873  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:30.112906  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:30.187035  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:30.187061  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:30.187081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:30.273106  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:30.273155  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:32.819179  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:32.833486  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:32.833555  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:32.872579  142150 cri.go:89] found id: ""
	I1212 01:06:32.872622  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.872631  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:32.872645  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:32.872700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:32.909925  142150 cri.go:89] found id: ""
	I1212 01:06:32.909958  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.909970  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:32.909979  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:32.910053  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:32.949085  142150 cri.go:89] found id: ""
	I1212 01:06:32.949116  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.949127  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:32.949135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:32.949197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:32.985755  142150 cri.go:89] found id: ""
	I1212 01:06:32.985782  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.985790  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:32.985796  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:32.985845  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:33.028340  142150 cri.go:89] found id: ""
	I1212 01:06:33.028367  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.028374  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:33.028380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:33.028432  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:33.064254  142150 cri.go:89] found id: ""
	I1212 01:06:33.064283  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.064292  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:33.064298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:33.064349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:33.099905  142150 cri.go:89] found id: ""
	I1212 01:06:33.099936  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.099943  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:33.099949  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:33.100008  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:33.137958  142150 cri.go:89] found id: ""
	I1212 01:06:33.137993  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.138004  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:33.138016  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:33.138034  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:33.190737  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:33.190776  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:33.205466  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:33.205502  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:33.278815  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:33.278844  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:33.278863  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:33.357387  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:33.357429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:35.898317  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:35.913832  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:35.913907  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:35.950320  142150 cri.go:89] found id: ""
	I1212 01:06:35.950345  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.950353  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:35.950359  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:35.950407  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:35.989367  142150 cri.go:89] found id: ""
	I1212 01:06:35.989394  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.989403  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:35.989409  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:35.989457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:36.024118  142150 cri.go:89] found id: ""
	I1212 01:06:36.024148  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.024155  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:36.024163  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:36.024221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:36.059937  142150 cri.go:89] found id: ""
	I1212 01:06:36.059966  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.059974  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:36.059980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:36.060030  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:36.096897  142150 cri.go:89] found id: ""
	I1212 01:06:36.096921  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.096933  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:36.096941  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:36.096994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:36.134387  142150 cri.go:89] found id: ""
	I1212 01:06:36.134412  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.134420  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:36.134426  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:36.134490  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:36.177414  142150 cri.go:89] found id: ""
	I1212 01:06:36.177452  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.177464  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:36.177471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:36.177533  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:36.221519  142150 cri.go:89] found id: ""
	I1212 01:06:36.221553  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.221563  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:36.221575  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:36.221590  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:36.234862  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:36.234891  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:36.314361  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:36.314391  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:36.314407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:36.398283  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:36.398328  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:36.441441  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:36.441481  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:38.995369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:39.009149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:39.009221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:39.044164  142150 cri.go:89] found id: ""
	I1212 01:06:39.044194  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.044204  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:39.044210  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:39.044259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:39.080145  142150 cri.go:89] found id: ""
	I1212 01:06:39.080180  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.080191  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:39.080197  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:39.080254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:39.119128  142150 cri.go:89] found id: ""
	I1212 01:06:39.119156  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.119167  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:39.119174  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:39.119240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:39.157444  142150 cri.go:89] found id: ""
	I1212 01:06:39.157476  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.157487  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:39.157495  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:39.157562  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:39.191461  142150 cri.go:89] found id: ""
	I1212 01:06:39.191486  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.191497  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:39.191505  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:39.191573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:39.227742  142150 cri.go:89] found id: ""
	I1212 01:06:39.227769  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.227777  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:39.227783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:39.227832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:39.268207  142150 cri.go:89] found id: ""
	I1212 01:06:39.268239  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.268251  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:39.268259  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:39.268319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:39.304054  142150 cri.go:89] found id: ""
	I1212 01:06:39.304092  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.304103  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:39.304115  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:39.304128  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:39.381937  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:39.381979  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:39.421824  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:39.421864  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:39.475968  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:39.476020  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:39.491398  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:39.491429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:39.568463  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:42.068594  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:42.082041  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:42.082123  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:42.121535  142150 cri.go:89] found id: ""
	I1212 01:06:42.121562  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.121570  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:42.121577  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:42.121627  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:42.156309  142150 cri.go:89] found id: ""
	I1212 01:06:42.156341  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.156350  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:42.156364  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:42.156427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:42.190111  142150 cri.go:89] found id: ""
	I1212 01:06:42.190137  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.190145  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:42.190151  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:42.190209  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:42.225424  142150 cri.go:89] found id: ""
	I1212 01:06:42.225452  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.225461  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:42.225468  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:42.225526  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:42.260519  142150 cri.go:89] found id: ""
	I1212 01:06:42.260552  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.260564  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:42.260576  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:42.260644  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:42.296987  142150 cri.go:89] found id: ""
	I1212 01:06:42.297017  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.297028  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:42.297036  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:42.297109  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:42.331368  142150 cri.go:89] found id: ""
	I1212 01:06:42.331400  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.331409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:42.331415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:42.331482  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:42.367010  142150 cri.go:89] found id: ""
	I1212 01:06:42.367051  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.367062  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:42.367075  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:42.367093  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:42.381264  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:42.381299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:42.452831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:42.452856  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:42.452877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:42.531965  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:42.532006  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:42.571718  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:42.571757  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.128570  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:45.142897  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:45.142969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:45.186371  142150 cri.go:89] found id: ""
	I1212 01:06:45.186404  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.186412  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:45.186418  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:45.186468  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:45.224085  142150 cri.go:89] found id: ""
	I1212 01:06:45.224115  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.224123  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:45.224129  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:45.224195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:45.258477  142150 cri.go:89] found id: ""
	I1212 01:06:45.258510  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.258522  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:45.258530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:45.258590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:45.293091  142150 cri.go:89] found id: ""
	I1212 01:06:45.293125  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.293137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:45.293145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:45.293211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:45.331275  142150 cri.go:89] found id: ""
	I1212 01:06:45.331314  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.331325  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:45.331332  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:45.331400  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:45.374915  142150 cri.go:89] found id: ""
	I1212 01:06:45.374943  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.374956  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:45.374965  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:45.375027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:45.415450  142150 cri.go:89] found id: ""
	I1212 01:06:45.415480  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.415489  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:45.415496  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:45.415548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:45.454407  142150 cri.go:89] found id: ""
	I1212 01:06:45.454431  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.454439  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:45.454449  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:45.454460  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.508573  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:45.508612  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:45.524049  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:45.524085  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:45.593577  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:45.593602  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:45.593618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:45.678581  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:45.678620  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.221523  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:48.235146  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:48.235212  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:48.271845  142150 cri.go:89] found id: ""
	I1212 01:06:48.271875  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.271885  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:48.271891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:48.271944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:48.308558  142150 cri.go:89] found id: ""
	I1212 01:06:48.308589  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.308602  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:48.308610  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:48.308673  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:48.346395  142150 cri.go:89] found id: ""
	I1212 01:06:48.346423  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.346434  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:48.346440  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:48.346501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:48.381505  142150 cri.go:89] found id: ""
	I1212 01:06:48.381536  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.381548  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:48.381555  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:48.381617  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:48.417829  142150 cri.go:89] found id: ""
	I1212 01:06:48.417859  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.417871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:48.417878  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:48.417944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:48.453476  142150 cri.go:89] found id: ""
	I1212 01:06:48.453508  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.453519  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:48.453528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:48.453592  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:48.490500  142150 cri.go:89] found id: ""
	I1212 01:06:48.490531  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.490541  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:48.490547  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:48.490597  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:48.527492  142150 cri.go:89] found id: ""
	I1212 01:06:48.527520  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.527529  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:48.527539  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:48.527550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.570458  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:48.570499  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:48.623986  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:48.624031  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:48.638363  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:48.638392  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:48.709373  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:48.709400  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:48.709416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:51.291629  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:51.305060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:51.305140  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:51.340368  142150 cri.go:89] found id: ""
	I1212 01:06:51.340394  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.340404  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:51.340411  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:51.340489  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:51.381421  142150 cri.go:89] found id: ""
	I1212 01:06:51.381453  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.381466  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:51.381474  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:51.381536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:51.421482  142150 cri.go:89] found id: ""
	I1212 01:06:51.421518  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.421530  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:51.421538  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:51.421605  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:51.457190  142150 cri.go:89] found id: ""
	I1212 01:06:51.457217  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.457227  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:51.457236  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:51.457302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:51.496149  142150 cri.go:89] found id: ""
	I1212 01:06:51.496184  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.496196  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:51.496205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:51.496270  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:51.529779  142150 cri.go:89] found id: ""
	I1212 01:06:51.529809  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.529820  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:51.529826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:51.529893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:51.568066  142150 cri.go:89] found id: ""
	I1212 01:06:51.568105  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.568118  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:51.568126  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:51.568197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:51.605556  142150 cri.go:89] found id: ""
	I1212 01:06:51.605593  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.605605  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:51.605616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:51.605632  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:51.680531  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:51.680570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:51.727663  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:51.727697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:51.780013  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:51.780053  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:51.794203  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:51.794232  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:51.869407  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
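	Each cycle above follows the same pattern: minikube probes for a running kube-apiserver process, lists the control-plane containers through crictl, and then falls back to collecting kubelet, dmesg, CRI-O and "describe nodes" output; every probe comes back empty and kubectl cannot reach localhost:8443. A minimal sketch of the equivalent manual check, assuming shell access to the node and the same binary paths and port shown in the log output above (taken from the log, not verified independently):
	
	    # confirm no control-plane containers exist in the CRI runtime
	    sudo crictl ps -a --name kube-apiserver
	    sudo crictl ps -a --name etcd
	
	    # confirm nothing is serving on the API port the kubeconfig points at
	    curl -k https://localhost:8443/healthz    # expected here: connection refused
	
	    # the same query minikube issues, using its bundled kubectl
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	
	If all three come back empty or refused, as they do throughout this log, the apiserver container was never created by the runtime rather than created and crashing.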
	I1212 01:06:54.369854  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:54.383539  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:54.383625  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:54.418536  142150 cri.go:89] found id: ""
	I1212 01:06:54.418574  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.418586  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:54.418594  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:54.418657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:54.454485  142150 cri.go:89] found id: ""
	I1212 01:06:54.454515  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.454523  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:54.454531  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:54.454581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:54.494254  142150 cri.go:89] found id: ""
	I1212 01:06:54.494284  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.494296  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:54.494304  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:54.494366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:54.532727  142150 cri.go:89] found id: ""
	I1212 01:06:54.532757  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.532768  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:54.532776  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:54.532862  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:54.569817  142150 cri.go:89] found id: ""
	I1212 01:06:54.569845  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.569856  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:54.569864  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:54.569927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:54.628530  142150 cri.go:89] found id: ""
	I1212 01:06:54.628564  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.628577  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:54.628585  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:54.628635  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:54.666761  142150 cri.go:89] found id: ""
	I1212 01:06:54.666792  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.666801  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:54.666808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:54.666879  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:54.703699  142150 cri.go:89] found id: ""
	I1212 01:06:54.703726  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.703737  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:54.703749  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:54.703764  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:54.754635  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:54.754672  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:54.769112  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:54.769143  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:54.845563  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.845580  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:54.845591  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:54.922651  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:54.922690  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:57.467454  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:57.480673  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:57.480769  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:57.517711  142150 cri.go:89] found id: ""
	I1212 01:06:57.517737  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.517745  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:57.517751  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:57.517813  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:57.552922  142150 cri.go:89] found id: ""
	I1212 01:06:57.552948  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.552956  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:57.552977  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:57.553061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:57.589801  142150 cri.go:89] found id: ""
	I1212 01:06:57.589827  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.589839  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:57.589845  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:57.589909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:57.626088  142150 cri.go:89] found id: ""
	I1212 01:06:57.626123  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.626135  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:57.626142  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:57.626211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:57.661228  142150 cri.go:89] found id: ""
	I1212 01:06:57.661261  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.661273  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:57.661281  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:57.661344  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:57.699523  142150 cri.go:89] found id: ""
	I1212 01:06:57.699551  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.699559  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:57.699565  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:57.699641  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:57.739000  142150 cri.go:89] found id: ""
	I1212 01:06:57.739032  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.739043  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:57.739051  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:57.739128  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:57.776691  142150 cri.go:89] found id: ""
	I1212 01:06:57.776723  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.776732  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:57.776743  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:57.776767  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:57.828495  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:57.828535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:57.843935  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:57.843970  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:57.916420  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:57.916446  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:57.916463  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:57.994107  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:57.994158  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:00.540646  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:00.554032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:00.554141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:00.590815  142150 cri.go:89] found id: ""
	I1212 01:07:00.590843  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.590852  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:00.590858  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:00.590919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:00.627460  142150 cri.go:89] found id: ""
	I1212 01:07:00.627494  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.627507  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:00.627515  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:00.627586  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:00.667429  142150 cri.go:89] found id: ""
	I1212 01:07:00.667472  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.667484  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:00.667494  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:00.667558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:00.713026  142150 cri.go:89] found id: ""
	I1212 01:07:00.713053  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.713060  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:00.713067  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:00.713129  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:00.748218  142150 cri.go:89] found id: ""
	I1212 01:07:00.748251  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.748264  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:00.748272  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:00.748325  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:00.786287  142150 cri.go:89] found id: ""
	I1212 01:07:00.786314  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.786322  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:00.786331  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:00.786389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:00.822957  142150 cri.go:89] found id: ""
	I1212 01:07:00.822986  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.822999  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:00.823007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:00.823081  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:00.862310  142150 cri.go:89] found id: ""
	I1212 01:07:00.862342  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.862354  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:00.862368  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:00.862385  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:00.930308  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:00.930343  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:00.930360  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:01.013889  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:01.013934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:01.064305  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:01.064342  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:01.133631  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:01.133678  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:03.648853  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:03.663287  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:03.663349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:03.700723  142150 cri.go:89] found id: ""
	I1212 01:07:03.700754  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.700766  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:03.700774  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:03.700840  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:03.741025  142150 cri.go:89] found id: ""
	I1212 01:07:03.741054  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.741065  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:03.741073  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:03.741147  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:03.782877  142150 cri.go:89] found id: ""
	I1212 01:07:03.782914  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.782927  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:03.782935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:03.782998  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:03.819227  142150 cri.go:89] found id: ""
	I1212 01:07:03.819272  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.819285  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:03.819292  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:03.819341  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:03.856660  142150 cri.go:89] found id: ""
	I1212 01:07:03.856687  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.856695  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:03.856701  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:03.856750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:03.893368  142150 cri.go:89] found id: ""
	I1212 01:07:03.893400  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.893410  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:03.893417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:03.893469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:03.929239  142150 cri.go:89] found id: ""
	I1212 01:07:03.929267  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.929275  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:03.929282  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:03.929335  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:03.963040  142150 cri.go:89] found id: ""
	I1212 01:07:03.963077  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.963089  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:03.963113  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:03.963129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:04.040119  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:04.040147  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:04.040161  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:04.122230  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:04.122269  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:04.163266  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:04.163298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:04.218235  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:04.218271  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:06.732405  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:06.748171  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:06.748278  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:06.792828  142150 cri.go:89] found id: ""
	I1212 01:07:06.792853  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.792861  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:06.792868  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:06.792929  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:06.851440  142150 cri.go:89] found id: ""
	I1212 01:07:06.851472  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.851483  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:06.851490  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:06.851556  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:06.894850  142150 cri.go:89] found id: ""
	I1212 01:07:06.894879  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.894887  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:06.894893  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:06.894944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:06.931153  142150 cri.go:89] found id: ""
	I1212 01:07:06.931188  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.931199  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:06.931206  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:06.931271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:06.966835  142150 cri.go:89] found id: ""
	I1212 01:07:06.966862  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.966871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:06.966877  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:06.966939  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:07.004810  142150 cri.go:89] found id: ""
	I1212 01:07:07.004839  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.004848  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:07.004854  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:07.004912  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:07.042641  142150 cri.go:89] found id: ""
	I1212 01:07:07.042679  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.042691  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:07.042699  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:07.042764  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:07.076632  142150 cri.go:89] found id: ""
	I1212 01:07:07.076659  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.076668  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:07.076678  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:07.076692  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:07.136796  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:07.136841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:07.153797  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:07.153831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:07.231995  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:07.232025  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:07.232042  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:07.319913  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:07.319950  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:09.862898  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:09.878554  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:09.878640  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:09.914747  142150 cri.go:89] found id: ""
	I1212 01:07:09.914782  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.914795  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:09.914803  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:09.914864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:09.949960  142150 cri.go:89] found id: ""
	I1212 01:07:09.949998  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.950019  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:09.950027  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:09.950084  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:09.989328  142150 cri.go:89] found id: ""
	I1212 01:07:09.989368  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.989380  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:09.989388  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:09.989454  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:10.024352  142150 cri.go:89] found id: ""
	I1212 01:07:10.024382  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.024390  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:10.024397  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:10.024446  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:10.058429  142150 cri.go:89] found id: ""
	I1212 01:07:10.058459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.058467  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:10.058473  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:10.058524  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:10.095183  142150 cri.go:89] found id: ""
	I1212 01:07:10.095219  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.095227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:10.095232  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:10.095284  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:10.129657  142150 cri.go:89] found id: ""
	I1212 01:07:10.129684  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.129695  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:10.129703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:10.129759  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:10.164433  142150 cri.go:89] found id: ""
	I1212 01:07:10.164459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.164470  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:10.164483  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:10.164500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:10.178655  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:10.178687  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:10.252370  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:10.252403  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:10.252421  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:10.329870  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:10.329914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:10.377778  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:10.377812  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:12.929471  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:12.944591  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:12.944651  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:12.980053  142150 cri.go:89] found id: ""
	I1212 01:07:12.980079  142150 logs.go:282] 0 containers: []
	W1212 01:07:12.980088  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:12.980097  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:12.980182  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:13.021710  142150 cri.go:89] found id: ""
	I1212 01:07:13.021743  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.021752  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:13.021758  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:13.021828  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:13.060426  142150 cri.go:89] found id: ""
	I1212 01:07:13.060458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.060469  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:13.060477  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:13.060545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:13.097435  142150 cri.go:89] found id: ""
	I1212 01:07:13.097458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.097466  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:13.097471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:13.097521  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:13.134279  142150 cri.go:89] found id: ""
	I1212 01:07:13.134314  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.134327  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:13.134335  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:13.134402  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:13.169942  142150 cri.go:89] found id: ""
	I1212 01:07:13.169971  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.169984  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:13.169992  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:13.170054  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:13.207495  142150 cri.go:89] found id: ""
	I1212 01:07:13.207526  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.207537  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:13.207550  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:13.207636  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:13.245214  142150 cri.go:89] found id: ""
	I1212 01:07:13.245240  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.245248  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:13.245258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:13.245272  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:13.301041  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:13.301081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:13.316068  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:13.316104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:13.391091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:13.391120  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:13.391138  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:13.472090  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:13.472130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:16.013216  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.026636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:16.026715  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:16.062126  142150 cri.go:89] found id: ""
	I1212 01:07:16.062157  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.062169  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:16.062177  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:16.062240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:16.097538  142150 cri.go:89] found id: ""
	I1212 01:07:16.097562  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.097572  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:16.097581  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:16.097637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:16.133615  142150 cri.go:89] found id: ""
	I1212 01:07:16.133649  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.133661  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:16.133670  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:16.133732  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:16.169327  142150 cri.go:89] found id: ""
	I1212 01:07:16.169392  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.169414  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:16.169431  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:16.169538  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:16.214246  142150 cri.go:89] found id: ""
	I1212 01:07:16.214270  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.214278  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:16.214284  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:16.214342  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:16.251578  142150 cri.go:89] found id: ""
	I1212 01:07:16.251629  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.251641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:16.251649  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:16.251712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:16.298772  142150 cri.go:89] found id: ""
	I1212 01:07:16.298802  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.298811  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:16.298818  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:16.298891  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:16.336901  142150 cri.go:89] found id: ""
	I1212 01:07:16.336937  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.336946  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:16.336957  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:16.336969  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:16.389335  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:16.389376  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:16.403713  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:16.403743  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:16.485945  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:16.485972  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:16.485992  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:16.572137  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:16.572185  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.120296  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.133826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:19.133902  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:19.174343  142150 cri.go:89] found id: ""
	I1212 01:07:19.174381  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.174391  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:19.174397  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:19.174449  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:19.212403  142150 cri.go:89] found id: ""
	I1212 01:07:19.212425  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.212433  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:19.212439  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:19.212488  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:19.247990  142150 cri.go:89] found id: ""
	I1212 01:07:19.248018  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.248027  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:19.248033  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:19.248088  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:19.286733  142150 cri.go:89] found id: ""
	I1212 01:07:19.286763  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.286775  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:19.286783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:19.286848  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:19.325967  142150 cri.go:89] found id: ""
	I1212 01:07:19.325995  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.326006  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:19.326013  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:19.326073  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:19.361824  142150 cri.go:89] found id: ""
	I1212 01:07:19.361862  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.361874  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:19.361882  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:19.361951  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:19.399874  142150 cri.go:89] found id: ""
	I1212 01:07:19.399903  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.399915  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:19.399924  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:19.399978  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:19.444342  142150 cri.go:89] found id: ""
	I1212 01:07:19.444368  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.444376  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:19.444386  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:19.444398  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:19.524722  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:19.524766  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.564941  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:19.564984  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:19.620881  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:19.620915  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:19.635038  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:19.635078  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:19.707819  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
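	The timestamps show the same probe repeating roughly every three seconds (01:07:19.7, 01:07:22.2, 01:07:25.3) without the apiserver ever appearing. A comparable stand-alone wait loop, sketched only to illustrate the pattern the log is following (the pgrep expression is copied from the log; the 5-minute budget is an assumption, not minikube's actual timeout):
	
	    # poll for a kube-apiserver process for up to 5 minutes, then give up
	    deadline=$((SECONDS + 300))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "kube-apiserver never started" >&2
	        exit 1
	      fi
	      sleep 3
	    done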
	I1212 01:07:22.208686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.222716  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:22.222774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:22.258211  142150 cri.go:89] found id: ""
	I1212 01:07:22.258237  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.258245  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:22.258251  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:22.258299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:22.294663  142150 cri.go:89] found id: ""
	I1212 01:07:22.294692  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.294701  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:22.294707  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:22.294771  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:22.331817  142150 cri.go:89] found id: ""
	I1212 01:07:22.331849  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.331861  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:22.331869  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:22.331927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:22.373138  142150 cri.go:89] found id: ""
	I1212 01:07:22.373168  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.373176  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:22.373185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:22.373238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:22.409864  142150 cri.go:89] found id: ""
	I1212 01:07:22.409903  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.409916  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:22.409927  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:22.409983  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:22.447498  142150 cri.go:89] found id: ""
	I1212 01:07:22.447531  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.447542  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:22.447549  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:22.447626  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:22.488674  142150 cri.go:89] found id: ""
	I1212 01:07:22.488715  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.488727  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:22.488735  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:22.488803  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:22.529769  142150 cri.go:89] found id: ""
	I1212 01:07:22.529797  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.529806  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:22.529817  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:22.529837  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:22.611864  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:22.611889  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:22.611904  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:22.694660  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:22.694707  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:22.736800  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:22.736838  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:22.789670  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:22.789710  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:25.305223  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.318986  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:25.319057  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:25.356111  142150 cri.go:89] found id: ""
	I1212 01:07:25.356140  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.356150  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:25.356157  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:25.356223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:25.396120  142150 cri.go:89] found id: ""
	I1212 01:07:25.396151  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.396163  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:25.396171  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:25.396236  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:25.436647  142150 cri.go:89] found id: ""
	I1212 01:07:25.436674  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.436681  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:25.436687  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:25.436744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:25.475682  142150 cri.go:89] found id: ""
	I1212 01:07:25.475709  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.475721  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:25.475729  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:25.475791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:25.512536  142150 cri.go:89] found id: ""
	I1212 01:07:25.512564  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.512576  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:25.512584  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:25.512655  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:25.549569  142150 cri.go:89] found id: ""
	I1212 01:07:25.549600  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.549609  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:25.549616  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:25.549681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:25.585042  142150 cri.go:89] found id: ""
	I1212 01:07:25.585074  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.585089  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:25.585106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:25.585181  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:25.626257  142150 cri.go:89] found id: ""
	I1212 01:07:25.626283  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.626291  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:25.626301  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:25.626314  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:25.679732  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:25.679773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:25.693682  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:25.693711  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:25.770576  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:25.770599  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:25.770613  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:25.848631  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:25.848667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.388387  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.404838  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:28.404925  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:28.447452  142150 cri.go:89] found id: ""
	I1212 01:07:28.447486  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.447498  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:28.447506  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:28.447581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:28.487285  142150 cri.go:89] found id: ""
	I1212 01:07:28.487312  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.487321  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:28.487326  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:28.487389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:28.520403  142150 cri.go:89] found id: ""
	I1212 01:07:28.520433  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.520442  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:28.520448  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:28.520514  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:28.556671  142150 cri.go:89] found id: ""
	I1212 01:07:28.556703  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.556712  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:28.556720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:28.556787  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:28.597136  142150 cri.go:89] found id: ""
	I1212 01:07:28.597165  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.597176  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:28.597185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:28.597258  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:28.632603  142150 cri.go:89] found id: ""
	I1212 01:07:28.632633  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.632641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:28.632648  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:28.632710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:28.672475  142150 cri.go:89] found id: ""
	I1212 01:07:28.672512  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.672523  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:28.672530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:28.672581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:28.715053  142150 cri.go:89] found id: ""
	I1212 01:07:28.715093  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.715104  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:28.715114  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:28.715129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.752978  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:28.753017  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:28.807437  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:28.807479  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:28.822196  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:28.822223  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:28.902592  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:28.902616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:28.902630  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:31.486972  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.500676  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:31.500755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:31.536877  142150 cri.go:89] found id: ""
	I1212 01:07:31.536911  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.536922  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:31.536931  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:31.537000  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:31.572637  142150 cri.go:89] found id: ""
	I1212 01:07:31.572670  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.572684  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:31.572692  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:31.572761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:31.610050  142150 cri.go:89] found id: ""
	I1212 01:07:31.610084  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.610097  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:31.610106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:31.610159  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:31.645872  142150 cri.go:89] found id: ""
	I1212 01:07:31.645905  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.645918  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:31.645926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:31.645988  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:31.682374  142150 cri.go:89] found id: ""
	I1212 01:07:31.682401  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.682409  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:31.682415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:31.682464  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:31.724755  142150 cri.go:89] found id: ""
	I1212 01:07:31.724788  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.724801  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:31.724809  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:31.724877  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:31.760700  142150 cri.go:89] found id: ""
	I1212 01:07:31.760732  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.760741  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:31.760747  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:31.760823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:31.794503  142150 cri.go:89] found id: ""
	I1212 01:07:31.794538  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.794549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:31.794562  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:31.794577  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:31.837103  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:31.837139  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:31.889104  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:31.889142  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:31.905849  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:31.905883  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:31.983351  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:31.983372  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:31.983388  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:34.564505  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.577808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:34.577884  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:34.616950  142150 cri.go:89] found id: ""
	I1212 01:07:34.616979  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.616992  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:34.617001  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:34.617071  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:34.653440  142150 cri.go:89] found id: ""
	I1212 01:07:34.653470  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.653478  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:34.653485  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:34.653535  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:34.693426  142150 cri.go:89] found id: ""
	I1212 01:07:34.693457  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.693465  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:34.693471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:34.693520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:34.727113  142150 cri.go:89] found id: ""
	I1212 01:07:34.727154  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.727166  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:34.727175  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:34.727237  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:34.766942  142150 cri.go:89] found id: ""
	I1212 01:07:34.766967  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.766974  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:34.766981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:34.767032  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:34.806189  142150 cri.go:89] found id: ""
	I1212 01:07:34.806214  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.806223  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:34.806229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:34.806293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:34.839377  142150 cri.go:89] found id: ""
	I1212 01:07:34.839408  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.839420  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:34.839429  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:34.839486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:34.877512  142150 cri.go:89] found id: ""
	I1212 01:07:34.877541  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.877549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:34.877558  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:34.877570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:34.914966  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:34.914994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:34.964993  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:34.965033  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:34.979644  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:34.979677  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:35.050842  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:35.050868  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:35.050893  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:37.634362  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.647476  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:37.647542  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:37.681730  142150 cri.go:89] found id: ""
	I1212 01:07:37.681760  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.681768  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:37.681775  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:37.681827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:37.716818  142150 cri.go:89] found id: ""
	I1212 01:07:37.716845  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.716858  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:37.716864  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:37.716913  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:37.753005  142150 cri.go:89] found id: ""
	I1212 01:07:37.753034  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.753042  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:37.753048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:37.753104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:37.789850  142150 cri.go:89] found id: ""
	I1212 01:07:37.789888  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.789900  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:37.789909  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:37.789971  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:37.826418  142150 cri.go:89] found id: ""
	I1212 01:07:37.826455  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.826466  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:37.826475  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:37.826539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:37.862108  142150 cri.go:89] found id: ""
	I1212 01:07:37.862134  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.862143  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:37.862149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:37.862202  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:37.897622  142150 cri.go:89] found id: ""
	I1212 01:07:37.897660  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.897673  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:37.897681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:37.897743  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:37.935027  142150 cri.go:89] found id: ""
	I1212 01:07:37.935055  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.935063  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:37.935072  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:37.935088  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:37.949860  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:37.949890  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:38.019692  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:38.019721  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:38.019740  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:38.100964  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:38.100994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:38.144480  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:38.144514  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:40.699192  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.712311  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:40.712398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:40.748454  142150 cri.go:89] found id: ""
	I1212 01:07:40.748482  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.748490  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:40.748496  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:40.748545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:40.785262  142150 cri.go:89] found id: ""
	I1212 01:07:40.785292  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.785305  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:40.785312  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:40.785376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:40.821587  142150 cri.go:89] found id: ""
	I1212 01:07:40.821624  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.821636  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:40.821644  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:40.821713  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:40.882891  142150 cri.go:89] found id: ""
	I1212 01:07:40.882918  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.882926  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:40.882935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:40.882987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:40.923372  142150 cri.go:89] found id: ""
	I1212 01:07:40.923403  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.923412  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:40.923419  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:40.923485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:40.962753  142150 cri.go:89] found id: ""
	I1212 01:07:40.962781  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.962789  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:40.962795  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:40.962851  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:40.996697  142150 cri.go:89] found id: ""
	I1212 01:07:40.996731  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.996744  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:40.996751  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:40.996812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:41.031805  142150 cri.go:89] found id: ""
	I1212 01:07:41.031842  142150 logs.go:282] 0 containers: []
	W1212 01:07:41.031855  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:41.031866  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:41.031884  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:41.108288  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:41.108310  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:41.108333  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:41.190075  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:41.190115  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:41.235886  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:41.235927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:41.288515  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:41.288554  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:43.803694  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.817859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:43.817919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:43.864193  142150 cri.go:89] found id: ""
	I1212 01:07:43.864221  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.864228  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:43.864234  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:43.864288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:43.902324  142150 cri.go:89] found id: ""
	I1212 01:07:43.902359  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.902371  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:43.902379  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:43.902443  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:43.940847  142150 cri.go:89] found id: ""
	I1212 01:07:43.940880  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.940890  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:43.940896  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:43.940947  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:43.979270  142150 cri.go:89] found id: ""
	I1212 01:07:43.979302  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.979314  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:43.979322  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:43.979398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:44.024819  142150 cri.go:89] found id: ""
	I1212 01:07:44.024851  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.024863  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:44.024872  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:44.024941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:44.062199  142150 cri.go:89] found id: ""
	I1212 01:07:44.062225  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.062234  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:44.062242  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:44.062306  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:44.097158  142150 cri.go:89] found id: ""
	I1212 01:07:44.097181  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.097188  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:44.097194  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:44.097240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:44.132067  142150 cri.go:89] found id: ""
	I1212 01:07:44.132105  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.132120  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:44.132132  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:44.132148  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:44.179552  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:44.179589  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:44.238243  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:44.238299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:44.255451  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:44.255493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:44.331758  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:44.331784  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:44.331797  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:46.916033  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.929686  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:46.929761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:46.966328  142150 cri.go:89] found id: ""
	I1212 01:07:46.966357  142150 logs.go:282] 0 containers: []
	W1212 01:07:46.966365  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:46.966371  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:46.966423  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:47.002014  142150 cri.go:89] found id: ""
	I1212 01:07:47.002059  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.002074  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:47.002082  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:47.002148  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:47.038127  142150 cri.go:89] found id: ""
	I1212 01:07:47.038158  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.038166  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:47.038172  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:47.038222  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:47.071654  142150 cri.go:89] found id: ""
	I1212 01:07:47.071684  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.071696  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:47.071704  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:47.071774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:47.105489  142150 cri.go:89] found id: ""
	I1212 01:07:47.105515  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.105524  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:47.105530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:47.105577  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.143005  142150 cri.go:89] found id: ""
	I1212 01:07:47.143042  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.143051  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:47.143058  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:47.143114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:47.176715  142150 cri.go:89] found id: ""
	I1212 01:07:47.176746  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.176756  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:47.176764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:47.176827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:47.211770  142150 cri.go:89] found id: ""
	I1212 01:07:47.211806  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.211817  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:47.211831  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:47.211850  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:47.312766  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:47.312795  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:47.312811  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:47.402444  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:47.402493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:47.441071  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:47.441109  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:47.494465  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:47.494507  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.009996  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.023764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:50.023832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:50.060392  142150 cri.go:89] found id: ""
	I1212 01:07:50.060424  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.060433  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:50.060440  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:50.060497  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:50.094874  142150 cri.go:89] found id: ""
	I1212 01:07:50.094904  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.094914  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:50.094923  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:50.094987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:50.128957  142150 cri.go:89] found id: ""
	I1212 01:07:50.128986  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.128996  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:50.129005  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:50.129067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:50.164794  142150 cri.go:89] found id: ""
	I1212 01:07:50.164819  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.164828  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:50.164835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:50.164890  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:50.201295  142150 cri.go:89] found id: ""
	I1212 01:07:50.201330  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.201342  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:50.201350  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:50.201415  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:50.236158  142150 cri.go:89] found id: ""
	I1212 01:07:50.236200  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.236212  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:50.236221  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:50.236271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:50.270232  142150 cri.go:89] found id: ""
	I1212 01:07:50.270268  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.270280  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:50.270288  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:50.270356  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:50.303222  142150 cri.go:89] found id: ""
	I1212 01:07:50.303247  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.303258  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:50.303270  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:50.303288  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.316845  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:50.316874  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:50.384455  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:50.384483  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:50.384500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:50.462863  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:50.462921  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:50.503464  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:50.503495  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:53.063953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.079946  142150 kubeadm.go:597] duration metric: took 4m3.966538012s to restartPrimaryControlPlane
	W1212 01:07:53.080031  142150 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:53.080064  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:58.255454  142150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.175361092s)
	I1212 01:07:58.255545  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:58.270555  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:58.281367  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:58.291555  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:58.291580  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:58.291652  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:58.301408  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:58.301473  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:58.314324  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:58.326559  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:58.326628  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:58.338454  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.348752  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:58.348815  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.361968  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:58.374545  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:58.374614  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:07:58.387280  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:58.474893  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:07:58.475043  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:58.647222  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:58.647400  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:58.647566  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:07:58.839198  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:58.841061  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:58.841173  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:58.841297  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:58.841411  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:58.841491  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:58.841575  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:58.841650  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:58.841771  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:58.842200  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:58.842503  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:58.842993  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:58.843207  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:58.843355  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:58.919303  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:59.206038  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:59.318620  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:59.693734  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:59.709562  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:59.710774  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:59.710846  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:59.877625  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:59.879576  142150 out.go:235]   - Booting up control plane ...
	I1212 01:07:59.879733  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:59.892655  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:59.894329  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:59.897694  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:59.898269  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:08:39.900234  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:08:39.900966  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:39.901216  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:44.901739  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:44.901921  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:54.902652  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:54.902877  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:14.903981  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:14.904298  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:54.906484  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:54.906805  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:54.906828  142150 kubeadm.go:310] 
	I1212 01:09:54.906866  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:09:54.906908  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:09:54.906915  142150 kubeadm.go:310] 
	I1212 01:09:54.906944  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:09:54.906974  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:09:54.907087  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:09:54.907106  142150 kubeadm.go:310] 
	I1212 01:09:54.907205  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:09:54.907240  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:09:54.907271  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:09:54.907277  142150 kubeadm.go:310] 
	I1212 01:09:54.907369  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:09:54.907474  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:09:54.907499  142150 kubeadm.go:310] 
	I1212 01:09:54.907659  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:09:54.907749  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:09:54.907815  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:09:54.907920  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:09:54.907937  142150 kubeadm.go:310] 
	I1212 01:09:54.909051  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:54.909171  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:09:54.909277  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:09:54.909442  142150 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 01:09:54.909493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:09:55.377787  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:55.393139  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:55.403640  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:55.403664  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:55.403707  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:55.413315  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:55.413394  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:55.422954  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:55.432010  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:55.432073  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:55.441944  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.451991  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:55.452064  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.461584  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:55.471118  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:55.471191  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:09:55.480829  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:55.713359  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:11:51.592618  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:11:51.592716  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:11:51.594538  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:11:51.594601  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:11:51.594684  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:11:51.594835  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:11:51.594954  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:11:51.595052  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:11:51.597008  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:11:51.597118  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:11:51.597173  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:11:51.597241  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:11:51.597297  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:11:51.597359  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:11:51.597427  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:11:51.597508  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:11:51.597585  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:11:51.597681  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:11:51.597766  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:11:51.597804  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:11:51.597869  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:11:51.597941  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:11:51.598021  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:11:51.598119  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:11:51.598207  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:11:51.598320  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:11:51.598427  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:11:51.598485  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:11:51.598577  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:11:51.599918  142150 out.go:235]   - Booting up control plane ...
	I1212 01:11:51.600024  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:11:51.600148  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:11:51.600229  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:11:51.600341  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:11:51.600507  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:11:51.600572  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:11:51.600672  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.600878  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.600992  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601222  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601285  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601456  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601515  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601702  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601804  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.602020  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.602033  142150 kubeadm.go:310] 
	I1212 01:11:51.602093  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:11:51.602153  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:11:51.602163  142150 kubeadm.go:310] 
	I1212 01:11:51.602211  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:11:51.602274  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:11:51.602393  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:11:51.602416  142150 kubeadm.go:310] 
	I1212 01:11:51.602561  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:11:51.602618  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:11:51.602651  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:11:51.602661  142150 kubeadm.go:310] 
	I1212 01:11:51.602794  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:11:51.602919  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:11:51.602928  142150 kubeadm.go:310] 
	I1212 01:11:51.603023  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:11:51.603110  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:11:51.603176  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:11:51.603237  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:11:51.603252  142150 kubeadm.go:310] 
	I1212 01:11:51.603327  142150 kubeadm.go:394] duration metric: took 8m2.544704165s to StartCluster
	I1212 01:11:51.603376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:51.603447  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:51.648444  142150 cri.go:89] found id: ""
	I1212 01:11:51.648488  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.648501  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:11:51.648509  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:11:51.648573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:51.687312  142150 cri.go:89] found id: ""
	I1212 01:11:51.687341  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.687354  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:11:51.687362  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:11:51.687419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:51.726451  142150 cri.go:89] found id: ""
	I1212 01:11:51.726505  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.726521  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:11:51.726529  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:51.726594  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:51.763077  142150 cri.go:89] found id: ""
	I1212 01:11:51.763112  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.763125  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:11:51.763132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:51.763194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:51.801102  142150 cri.go:89] found id: ""
	I1212 01:11:51.801139  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.801152  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:51.801160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:51.801220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:51.838249  142150 cri.go:89] found id: ""
	I1212 01:11:51.838275  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.838283  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:11:51.838290  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:51.838357  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:51.874958  142150 cri.go:89] found id: ""
	I1212 01:11:51.874989  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.874997  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:51.875007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:11:51.875106  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:11:51.911408  142150 cri.go:89] found id: ""
	I1212 01:11:51.911440  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.911451  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:11:51.911465  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:51.911483  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:51.997485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:51.997516  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:11:51.997532  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:11:52.119827  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:11:52.119869  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:52.162270  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:52.162298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:52.215766  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:52.215805  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 01:11:52.231106  142150 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:11:52.231187  142150 out.go:270] * 
	* 
	W1212 01:11:52.231316  142150 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.231351  142150 out.go:270] * 
	* 
	W1212 01:11:52.232281  142150 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:11:52.235692  142150 out.go:201] 
	W1212 01:11:52.236852  142150 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.236890  142150 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:11:52.236910  142150 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:11:52.238333  142150 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-738445 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
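The transcript above shows the kubelet on the old-k8s-version node never answering http://localhost:10248/healthz, so kubeadm timed out waiting for the control plane and minikube exited with K8S_KUBELET_NOT_RUNNING. A minimal sketch of the troubleshooting steps the output itself recommends, run from the host against this test's profile (old-k8s-version-738445); wrapping the commands in `minikube ssh` is an assumption for convenience, the log runs them directly on the node:

	minikube -p old-k8s-version-738445 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-738445 ssh "sudo journalctl -xeu kubelet"
	minikube -p old-k8s-version-738445 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

Following the suggestion in the output, a retry with an explicit kubelet cgroup driver might look like (flags taken from the failing invocation above; the extra-config flag is the one the log proposes):

	out/minikube-linux-amd64 start -p old-k8s-version-738445 --memory=2200 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd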
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 2 (241.287387ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-738445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-738445 logs -n 25: (1.519682726s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-000053 -- sudo                         | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-000053                                 | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-459384                           | kubernetes-upgrade-459384    | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:54 UTC |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-242725             | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	| addons  | enable metrics-server -p embed-certs-607268            | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-535684 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | disable-driver-mounts-535684                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:56 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-076578  | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC | 12 Dec 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:59:45
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:59:45.233578  142150 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:59:45.233778  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.233807  142150 out.go:358] Setting ErrFile to fd 2...
	I1212 00:59:45.233824  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.234389  142150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:59:45.235053  142150 out.go:352] Setting JSON to false
	I1212 00:59:45.235948  142150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13327,"bootTime":1733951858,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:59:45.236050  142150 start.go:139] virtualization: kvm guest
	I1212 00:59:45.238284  142150 out.go:177] * [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:59:45.239634  142150 notify.go:220] Checking for updates...
	I1212 00:59:45.239643  142150 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:59:45.240927  142150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:59:45.242159  142150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:59:45.243348  142150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:59:45.244426  142150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:59:45.245620  142150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:59:45.247054  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:59:45.247412  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.247475  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.262410  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1212 00:59:45.262838  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.263420  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.263444  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.263773  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.263944  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.265490  142150 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1212 00:59:45.266656  142150 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:59:45.266925  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.266959  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.281207  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I1212 00:59:45.281596  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.281963  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.281991  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.282333  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.282519  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.316543  142150 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:59:45.317740  142150 start.go:297] selected driver: kvm2
	I1212 00:59:45.317754  142150 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.317960  142150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:59:45.318921  142150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.319030  142150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:59:45.334276  142150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:59:45.334744  142150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:59:45.334784  142150 cni.go:84] Creating CNI manager for ""
	I1212 00:59:45.334845  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:59:45.334901  142150 start.go:340] cluster config:
	{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.335060  142150 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.336873  142150 out.go:177] * Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	I1212 00:59:42.763823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:45.338030  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:59:45.338076  142150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:59:45.338087  142150 cache.go:56] Caching tarball of preloaded images
	I1212 00:59:45.338174  142150 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:59:45.338188  142150 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:59:45.338309  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:59:45.338520  142150 start.go:360] acquireMachinesLock for old-k8s-version-738445: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:59:48.839858  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:51.911930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:57.991816  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:01.063931  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:07.143823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:10.215896  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:16.295837  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:19.367812  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:25.447920  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:28.519965  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:34.599875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:37.671930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:43.751927  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:46.823861  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:52.903942  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:55.975967  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:02.055889  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:05.127830  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:11.207862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:14.279940  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:20.359871  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:23.431885  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:29.511831  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:32.583875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:38.663880  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:41.735869  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:47.815810  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:50.887937  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:56.967886  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:00.039916  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:06.119870  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:09.191917  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:15.271841  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:18.343881  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:24.423844  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:27.495936  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:33.575851  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:36.647862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:39.652816  141469 start.go:364] duration metric: took 4m35.142362604s to acquireMachinesLock for "embed-certs-607268"
	I1212 01:02:39.652891  141469 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:39.652902  141469 fix.go:54] fixHost starting: 
	I1212 01:02:39.653292  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:39.653345  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:39.668953  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1212 01:02:39.669389  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:39.669880  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:02:39.669906  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:39.670267  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:39.670428  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:39.670550  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:02:39.671952  141469 fix.go:112] recreateIfNeeded on embed-certs-607268: state=Stopped err=<nil>
	I1212 01:02:39.671994  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	W1212 01:02:39.672154  141469 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:39.677119  141469 out.go:177] * Restarting existing kvm2 VM for "embed-certs-607268" ...
	I1212 01:02:39.650358  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:39.650413  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650700  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:02:39.650731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650949  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:02:39.652672  141411 machine.go:96] duration metric: took 4m37.426998938s to provisionDockerMachine
	I1212 01:02:39.652723  141411 fix.go:56] duration metric: took 4m37.447585389s for fixHost
	I1212 01:02:39.652731  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 4m37.447868317s
	W1212 01:02:39.652756  141411 start.go:714] error starting host: provision: host is not running
	W1212 01:02:39.652909  141411 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1212 01:02:39.652919  141411 start.go:729] Will try again in 5 seconds ...
	I1212 01:02:39.682230  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Start
	I1212 01:02:39.682424  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring networks are active...
	I1212 01:02:39.683293  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network default is active
	I1212 01:02:39.683713  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network mk-embed-certs-607268 is active
	I1212 01:02:39.684046  141469 main.go:141] libmachine: (embed-certs-607268) Getting domain xml...
	I1212 01:02:39.684631  141469 main.go:141] libmachine: (embed-certs-607268) Creating domain...
	I1212 01:02:40.886712  141469 main.go:141] libmachine: (embed-certs-607268) Waiting to get IP...
	I1212 01:02:40.887814  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:40.888208  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:40.888304  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:40.888203  142772 retry.go:31] will retry after 273.835058ms: waiting for machine to come up
	I1212 01:02:41.164102  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.164574  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.164604  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.164545  142772 retry.go:31] will retry after 260.789248ms: waiting for machine to come up
	I1212 01:02:41.427069  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.427486  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.427513  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.427449  142772 retry.go:31] will retry after 330.511025ms: waiting for machine to come up
	I1212 01:02:41.760038  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.760388  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.760434  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.760337  142772 retry.go:31] will retry after 564.656792ms: waiting for machine to come up
	I1212 01:02:42.327037  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.327545  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.327567  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.327505  142772 retry.go:31] will retry after 473.714754ms: waiting for machine to come up
	I1212 01:02:42.803228  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.803607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.803639  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.803548  142772 retry.go:31] will retry after 872.405168ms: waiting for machine to come up
	I1212 01:02:43.677522  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:43.677891  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:43.677919  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:43.677833  142772 retry.go:31] will retry after 1.092518369s: waiting for machine to come up
	I1212 01:02:44.655390  141411 start.go:360] acquireMachinesLock for no-preload-242725: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:02:44.771319  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:44.771721  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:44.771751  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:44.771666  142772 retry.go:31] will retry after 1.147907674s: waiting for machine to come up
	I1212 01:02:45.921165  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:45.921636  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:45.921666  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:45.921589  142772 retry.go:31] will retry after 1.69009103s: waiting for machine to come up
	I1212 01:02:47.614391  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:47.614838  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:47.614863  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:47.614792  142772 retry.go:31] will retry after 1.65610672s: waiting for machine to come up
	I1212 01:02:49.272864  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:49.273312  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:49.273337  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:49.273268  142772 retry.go:31] will retry after 2.50327667s: waiting for machine to come up
	I1212 01:02:51.779671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:51.780077  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:51.780104  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:51.780016  142772 retry.go:31] will retry after 2.808303717s: waiting for machine to come up
	I1212 01:02:54.591866  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:54.592241  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:54.592285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:54.592208  142772 retry.go:31] will retry after 3.689107313s: waiting for machine to come up
	I1212 01:02:58.282552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.282980  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has current primary IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.283005  141469 main.go:141] libmachine: (embed-certs-607268) Found IP for machine: 192.168.50.151
	I1212 01:02:58.283018  141469 main.go:141] libmachine: (embed-certs-607268) Reserving static IP address...
	I1212 01:02:58.283419  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.283441  141469 main.go:141] libmachine: (embed-certs-607268) Reserved static IP address: 192.168.50.151
	I1212 01:02:58.283453  141469 main.go:141] libmachine: (embed-certs-607268) DBG | skip adding static IP to network mk-embed-certs-607268 - found existing host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"}
	I1212 01:02:58.283462  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Getting to WaitForSSH function...
	I1212 01:02:58.283469  141469 main.go:141] libmachine: (embed-certs-607268) Waiting for SSH to be available...
	I1212 01:02:58.285792  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286126  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.286162  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286298  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH client type: external
	I1212 01:02:58.286330  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa (-rw-------)
	I1212 01:02:58.286378  141469 main.go:141] libmachine: (embed-certs-607268) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:02:58.286394  141469 main.go:141] libmachine: (embed-certs-607268) DBG | About to run SSH command:
	I1212 01:02:58.286403  141469 main.go:141] libmachine: (embed-certs-607268) DBG | exit 0
	I1212 01:02:58.407633  141469 main.go:141] libmachine: (embed-certs-607268) DBG | SSH cmd err, output: <nil>: 
	I1212 01:02:58.407985  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetConfigRaw
	I1212 01:02:58.408745  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.411287  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.411642  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411920  141469 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/config.json ...
	I1212 01:02:58.412117  141469 machine.go:93] provisionDockerMachine start ...
	I1212 01:02:58.412136  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:58.412336  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.414338  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414643  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.414669  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414765  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.414944  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415114  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415259  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.415450  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.415712  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.415724  141469 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:02:58.520032  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:02:58.520068  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520312  141469 buildroot.go:166] provisioning hostname "embed-certs-607268"
	I1212 01:02:58.520341  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520539  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.523169  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.523584  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523733  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.523910  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524092  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524252  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.524411  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.524573  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.524584  141469 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-607268 && echo "embed-certs-607268" | sudo tee /etc/hostname
	I1212 01:02:58.642175  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-607268
	
	I1212 01:02:58.642232  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.645114  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645480  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.645505  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645698  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.645909  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646063  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646192  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.646334  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.646513  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.646530  141469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-607268' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-607268/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-607268' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:02:58.758655  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:58.758696  141469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:02:58.758715  141469 buildroot.go:174] setting up certificates
	I1212 01:02:58.758726  141469 provision.go:84] configureAuth start
	I1212 01:02:58.758735  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.759031  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.761749  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762024  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.762052  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762165  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.764356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.764699  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764781  141469 provision.go:143] copyHostCerts
	I1212 01:02:58.764874  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:02:58.764898  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:02:58.764986  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:02:58.765107  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:02:58.765118  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:02:58.765160  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:02:58.765235  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:02:58.765245  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:02:58.765296  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:02:58.765369  141469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-607268 san=[127.0.0.1 192.168.50.151 embed-certs-607268 localhost minikube]
	I1212 01:02:58.890412  141469 provision.go:177] copyRemoteCerts
	I1212 01:02:58.890519  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:02:58.890560  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.892973  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893262  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.893291  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893471  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.893647  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.893761  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.893855  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:58.973652  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:02:58.998097  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:02:59.022028  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:02:59.045859  141469 provision.go:87] duration metric: took 287.094036ms to configureAuth
	I1212 01:02:59.045892  141469 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:02:59.046119  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:02:59.046242  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.048869  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049255  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.049285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049465  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.049642  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049764  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049864  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.049974  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.050181  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.050198  141469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:02:59.276670  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:02:59.276708  141469 machine.go:96] duration metric: took 864.577145ms to provisionDockerMachine
	I1212 01:02:59.276724  141469 start.go:293] postStartSetup for "embed-certs-607268" (driver="kvm2")
	I1212 01:02:59.276747  141469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:02:59.276780  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.277171  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:02:59.277207  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.279974  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280341  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.280387  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280529  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.280738  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.280897  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.281026  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.363091  141469 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:02:59.367476  141469 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:02:59.367503  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:02:59.367618  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:02:59.367749  141469 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:02:59.367844  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:02:59.377895  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:02:59.402410  141469 start.go:296] duration metric: took 125.668908ms for postStartSetup
	I1212 01:02:59.402462  141469 fix.go:56] duration metric: took 19.749561015s for fixHost
	I1212 01:02:59.402485  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.405057  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.405385  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405624  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.405808  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.405974  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.406094  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.406237  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.406423  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.406436  141469 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:02:59.516697  141884 start.go:364] duration metric: took 3m42.810720852s to acquireMachinesLock for "default-k8s-diff-port-076578"
	I1212 01:02:59.516759  141884 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:59.516773  141884 fix.go:54] fixHost starting: 
	I1212 01:02:59.517192  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:59.517241  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:59.533969  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I1212 01:02:59.534367  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:59.534831  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:02:59.534854  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:59.535165  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:59.535362  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:02:59.535499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:02:59.536930  141884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-076578: state=Stopped err=<nil>
	I1212 01:02:59.536951  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	W1212 01:02:59.537093  141884 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:59.538974  141884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-076578" ...
	I1212 01:02:59.516496  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965379.489556963
	
	I1212 01:02:59.516525  141469 fix.go:216] guest clock: 1733965379.489556963
	I1212 01:02:59.516535  141469 fix.go:229] Guest: 2024-12-12 01:02:59.489556963 +0000 UTC Remote: 2024-12-12 01:02:59.40246635 +0000 UTC m=+295.033602018 (delta=87.090613ms)
	I1212 01:02:59.516574  141469 fix.go:200] guest clock delta is within tolerance: 87.090613ms
	I1212 01:02:59.516580  141469 start.go:83] releasing machines lock for "embed-certs-607268", held for 19.863720249s
	I1212 01:02:59.516605  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.516828  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:59.519731  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520075  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.520111  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520202  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520752  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520921  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.521064  141469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:02:59.521131  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.521155  141469 ssh_runner.go:195] Run: cat /version.json
	I1212 01:02:59.521171  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.523724  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.523971  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524036  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524063  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524221  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524374  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524375  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524401  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524553  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.524562  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524719  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524719  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.524837  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.525000  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.628168  141469 ssh_runner.go:195] Run: systemctl --version
	I1212 01:02:59.635800  141469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:02:59.788137  141469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:02:59.795216  141469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:02:59.795289  141469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:02:59.811889  141469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:02:59.811917  141469 start.go:495] detecting cgroup driver to use...
	I1212 01:02:59.811992  141469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:02:59.827154  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:02:59.841138  141469 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:02:59.841193  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:02:59.854874  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:02:59.869250  141469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:02:59.994723  141469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:00.136385  141469 docker.go:233] disabling docker service ...
	I1212 01:03:00.136462  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:00.150966  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:00.163907  141469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:00.340171  141469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:00.480828  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:00.498056  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:00.518273  141469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:00.518339  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.529504  141469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:00.529571  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.540616  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.553419  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.566004  141469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:00.577682  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.589329  141469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.612561  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.625526  141469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:00.635229  141469 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:00.635289  141469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:00.657569  141469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:00.669982  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:00.793307  141469 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:00.887423  141469 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:00.887498  141469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:00.892715  141469 start.go:563] Will wait 60s for crictl version
	I1212 01:03:00.892773  141469 ssh_runner.go:195] Run: which crictl
	I1212 01:03:00.896646  141469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:00.933507  141469 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:00.933606  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:00.977011  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:01.008491  141469 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:02:59.540301  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Start
	I1212 01:02:59.540482  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring networks are active...
	I1212 01:02:59.541181  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network default is active
	I1212 01:02:59.541503  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network mk-default-k8s-diff-port-076578 is active
	I1212 01:02:59.541802  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Getting domain xml...
	I1212 01:02:59.542437  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Creating domain...
	I1212 01:03:00.796803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting to get IP...
	I1212 01:03:00.797932  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798386  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798495  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.798404  142917 retry.go:31] will retry after 199.022306ms: waiting for machine to come up
	I1212 01:03:00.999067  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999547  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999572  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.999499  142917 retry.go:31] will retry after 340.093067ms: waiting for machine to come up
	I1212 01:03:01.340839  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341513  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.341437  142917 retry.go:31] will retry after 469.781704ms: waiting for machine to come up
	I1212 01:03:01.009956  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:03:01.012767  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013224  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:03:01.013252  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013471  141469 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:01.017815  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:01.032520  141469 kubeadm.go:883] updating cluster {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:01.032662  141469 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:01.032715  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:01.070406  141469 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:01.070478  141469 ssh_runner.go:195] Run: which lz4
	I1212 01:03:01.074840  141469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:01.079207  141469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:01.079238  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:02.524822  141469 crio.go:462] duration metric: took 1.450020274s to copy over tarball
	I1212 01:03:02.524909  141469 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:01.812803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813298  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813335  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.813232  142917 retry.go:31] will retry after 552.327376ms: waiting for machine to come up
	I1212 01:03:02.367682  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368152  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368187  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:02.368106  142917 retry.go:31] will retry after 629.731283ms: waiting for machine to come up
	I1212 01:03:02.999887  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000307  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000339  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.000235  142917 retry.go:31] will retry after 764.700679ms: waiting for machine to come up
	I1212 01:03:03.766389  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766891  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766919  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.766845  142917 retry.go:31] will retry after 920.806371ms: waiting for machine to come up
	I1212 01:03:04.689480  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690029  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:04.689996  142917 retry.go:31] will retry after 1.194297967s: waiting for machine to come up
	I1212 01:03:05.886345  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886729  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886796  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:05.886714  142917 retry.go:31] will retry after 1.60985804s: waiting for machine to come up
	I1212 01:03:04.719665  141469 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194717299s)
	I1212 01:03:04.719708  141469 crio.go:469] duration metric: took 2.194851225s to extract the tarball
	I1212 01:03:04.719719  141469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:04.756600  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:04.802801  141469 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:04.802832  141469 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:04.802840  141469 kubeadm.go:934] updating node { 192.168.50.151 8443 v1.31.2 crio true true} ...
	I1212 01:03:04.802949  141469 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-607268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:04.803008  141469 ssh_runner.go:195] Run: crio config
	I1212 01:03:04.854778  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:04.854804  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:04.854815  141469 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:04.854836  141469 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.151 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-607268 NodeName:embed-certs-607268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:04.854962  141469 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-607268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:04.855023  141469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:04.864877  141469 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:04.864967  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:04.874503  141469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1212 01:03:04.891124  141469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:04.907560  141469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1212 01:03:04.924434  141469 ssh_runner.go:195] Run: grep 192.168.50.151	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:04.928518  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:04.940523  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:05.076750  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:05.094388  141469 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268 for IP: 192.168.50.151
	I1212 01:03:05.094424  141469 certs.go:194] generating shared ca certs ...
	I1212 01:03:05.094440  141469 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:05.094618  141469 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:05.094691  141469 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:05.094710  141469 certs.go:256] generating profile certs ...
	I1212 01:03:05.094833  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/client.key
	I1212 01:03:05.094916  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key.9253237b
	I1212 01:03:05.094968  141469 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key
	I1212 01:03:05.095131  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:05.095177  141469 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:05.095192  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:05.095224  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:05.095254  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:05.095293  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:05.095359  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:05.096133  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:05.130605  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:05.164694  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:05.206597  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:05.241305  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 01:03:05.270288  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:05.296137  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:05.320158  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:05.343820  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:05.373277  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:05.397070  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:05.420738  141469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:05.437822  141469 ssh_runner.go:195] Run: openssl version
	I1212 01:03:05.443744  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:05.454523  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459182  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459237  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.465098  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:05.475681  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:05.486396  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490883  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490929  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.496613  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:05.507295  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:05.517980  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522534  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522590  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.528117  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:05.538979  141469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:05.543723  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:05.549611  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:05.555445  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:05.561482  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:05.567221  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:05.573015  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:03:05.578902  141469 kubeadm.go:392] StartCluster: {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:05.578984  141469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:05.579042  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.619027  141469 cri.go:89] found id: ""
	I1212 01:03:05.619115  141469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:05.629472  141469 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:05.629501  141469 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:05.629567  141469 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:05.639516  141469 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:05.640491  141469 kubeconfig.go:125] found "embed-certs-607268" server: "https://192.168.50.151:8443"
	I1212 01:03:05.642468  141469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:05.651891  141469 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.151
	I1212 01:03:05.651922  141469 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:05.651934  141469 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:05.651978  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.686414  141469 cri.go:89] found id: ""
	I1212 01:03:05.686501  141469 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:05.702724  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:05.712454  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:05.712480  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:05.712531  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:05.721529  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:05.721603  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:05.730897  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:05.739758  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:05.739815  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:05.749089  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.758042  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:05.758104  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.767425  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:05.776195  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:05.776260  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:05.785435  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:05.794795  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:05.918710  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:06.846975  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.072898  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.139677  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.237216  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:07.237336  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:07.738145  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.238219  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.255671  141469 api_server.go:72] duration metric: took 1.018455783s to wait for apiserver process to appear ...
	I1212 01:03:08.255705  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:08.255734  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:08.256408  141469 api_server.go:269] stopped: https://192.168.50.151:8443/healthz: Get "https://192.168.50.151:8443/healthz": dial tcp 192.168.50.151:8443: connect: connection refused
	I1212 01:03:08.756070  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:07.498527  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498942  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498966  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:07.498889  142917 retry.go:31] will retry after 2.278929136s: waiting for machine to come up
	I1212 01:03:09.779321  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779847  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779879  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:09.779793  142917 retry.go:31] will retry after 1.82028305s: waiting for machine to come up
	I1212 01:03:10.630080  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.630121  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.630140  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.674408  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.674470  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.756660  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.763043  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:10.763088  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.256254  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.263457  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.263481  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.756759  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.764019  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.764053  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:12.256627  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:12.262369  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:03:12.270119  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:12.270153  141469 api_server.go:131] duration metric: took 4.014438706s to wait for apiserver health ...
	I1212 01:03:12.270164  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:12.270172  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:12.272148  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:12.273667  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:12.289376  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:12.312620  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:12.323663  141469 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:12.323715  141469 system_pods.go:61] "coredns-7c65d6cfc9-n66x6" [ae2c1ac7-0c17-453d-a05c-70fbf6d81e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:12.323727  141469 system_pods.go:61] "etcd-embed-certs-607268" [811dc3d0-d893-45a0-a5c7-3fee0efd2e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:12.323747  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [11848f2c-215b-4cf4-88f0-93151c55e7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:12.323764  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [4f4066ab-b6e4-4a46-a03b-dda1776c39ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:12.323776  141469 system_pods.go:61] "kube-proxy-9f6lj" [2463030a-d7db-4031-9e26-0a56a9067520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:12.323784  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [c2aeaf4a-7fb8-4bb8-87ea-5401db017fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:12.323795  141469 system_pods.go:61] "metrics-server-6867b74b74-5bms9" [e1a794f9-cf60-486f-a0e8-670dc7dfb4da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:12.323803  141469 system_pods.go:61] "storage-provisioner" [b29860cd-465d-4e70-ad5d-dd17c22ae290] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:12.323820  141469 system_pods.go:74] duration metric: took 11.170811ms to wait for pod list to return data ...
	I1212 01:03:12.323845  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:12.327828  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:12.327863  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:12.327880  141469 node_conditions.go:105] duration metric: took 4.029256ms to run NodePressure ...
	I1212 01:03:12.327902  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:12.638709  141469 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644309  141469 kubeadm.go:739] kubelet initialised
	I1212 01:03:12.644332  141469 kubeadm.go:740] duration metric: took 5.590168ms waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644356  141469 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:12.650768  141469 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:11.601456  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602012  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602044  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:11.601956  142917 retry.go:31] will retry after 2.272258384s: waiting for machine to come up
	I1212 01:03:13.876607  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.876986  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.877024  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:13.876950  142917 retry.go:31] will retry after 4.014936005s: waiting for machine to come up
	I1212 01:03:19.148724  142150 start.go:364] duration metric: took 3m33.810164292s to acquireMachinesLock for "old-k8s-version-738445"
	I1212 01:03:19.148804  142150 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:19.148816  142150 fix.go:54] fixHost starting: 
	I1212 01:03:19.149247  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:19.149331  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:19.167749  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 01:03:19.168331  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:19.168873  142150 main.go:141] libmachine: Using API Version  1
	I1212 01:03:19.168906  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:19.169286  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:19.169500  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:19.169655  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 01:03:19.171285  142150 fix.go:112] recreateIfNeeded on old-k8s-version-738445: state=Stopped err=<nil>
	I1212 01:03:19.171323  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	W1212 01:03:19.171470  142150 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:19.174413  142150 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-738445" ...
	I1212 01:03:14.657097  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:16.658207  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:17.657933  141469 pod_ready.go:93] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:17.657957  141469 pod_ready.go:82] duration metric: took 5.007165494s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:17.657966  141469 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:19.175763  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .Start
	I1212 01:03:19.175946  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring networks are active...
	I1212 01:03:19.176721  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network default is active
	I1212 01:03:19.177067  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network mk-old-k8s-version-738445 is active
	I1212 01:03:19.177512  142150 main.go:141] libmachine: (old-k8s-version-738445) Getting domain xml...
	I1212 01:03:19.178281  142150 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 01:03:17.896127  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has current primary IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896639  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Found IP for machine: 192.168.39.174
	I1212 01:03:17.896659  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserving static IP address...
	I1212 01:03:17.897028  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.897062  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserved static IP address: 192.168.39.174
	I1212 01:03:17.897087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | skip adding static IP to network mk-default-k8s-diff-port-076578 - found existing host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"}
	I1212 01:03:17.897108  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Getting to WaitForSSH function...
	I1212 01:03:17.897126  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for SSH to be available...
	I1212 01:03:17.899355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899727  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.899754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899911  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH client type: external
	I1212 01:03:17.899941  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa (-rw-------)
	I1212 01:03:17.899976  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:17.899989  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | About to run SSH command:
	I1212 01:03:17.900005  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | exit 0
	I1212 01:03:18.036261  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:18.036610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetConfigRaw
	I1212 01:03:18.037352  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.040173  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040570  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.040595  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040866  141884 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/config.json ...
	I1212 01:03:18.041107  141884 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:18.041134  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.041355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.043609  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.043945  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.043973  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.044142  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.044291  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044466  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.044745  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.044986  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.045002  141884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:18.156161  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:18.156193  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156472  141884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-076578"
	I1212 01:03:18.156499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.159391  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.159871  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.159903  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.160048  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.160244  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160379  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160500  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.160681  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.160898  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.160917  141884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-076578 && echo "default-k8s-diff-port-076578" | sudo tee /etc/hostname
	I1212 01:03:18.285904  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-076578
	
	I1212 01:03:18.285937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.288620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.288987  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.289010  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.289285  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.289491  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289658  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289799  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.289981  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.290190  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.290223  141884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-076578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-076578/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-076578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:18.409683  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:18.409721  141884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:18.409751  141884 buildroot.go:174] setting up certificates
	I1212 01:03:18.409761  141884 provision.go:84] configureAuth start
	I1212 01:03:18.409782  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.410045  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.412393  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412721  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.412756  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.415204  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415502  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.415530  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415663  141884 provision.go:143] copyHostCerts
	I1212 01:03:18.415735  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:18.415757  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:18.415832  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:18.415925  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:18.415933  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:18.415952  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:18.416007  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:18.416015  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:18.416032  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:18.416081  141884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-076578 san=[127.0.0.1 192.168.39.174 default-k8s-diff-port-076578 localhost minikube]
	I1212 01:03:18.502493  141884 provision.go:177] copyRemoteCerts
	I1212 01:03:18.502562  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:18.502594  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.505104  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505377  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.505409  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505568  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.505754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.505892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.506034  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.590425  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:18.616850  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 01:03:18.640168  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:18.664517  141884 provision.go:87] duration metric: took 254.738256ms to configureAuth
	I1212 01:03:18.664542  141884 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:18.664705  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:03:18.664778  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.667425  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.667784  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.667808  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.668004  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.668178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668313  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668448  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.668587  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.668751  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.668772  141884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:18.906880  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:18.906908  141884 machine.go:96] duration metric: took 865.784426ms to provisionDockerMachine
	I1212 01:03:18.906920  141884 start.go:293] postStartSetup for "default-k8s-diff-port-076578" (driver="kvm2")
	I1212 01:03:18.906931  141884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:18.906949  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.907315  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:18.907348  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.909882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910213  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.910242  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910347  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.910542  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.910680  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.910806  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.994819  141884 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:18.998959  141884 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:18.998989  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:18.999069  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:18.999163  141884 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:18.999252  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:19.009226  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:19.032912  141884 start.go:296] duration metric: took 125.973128ms for postStartSetup
	I1212 01:03:19.032960  141884 fix.go:56] duration metric: took 19.516187722s for fixHost
	I1212 01:03:19.032990  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.035623  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.035947  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.035977  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.036151  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.036310  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036438  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036581  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.036738  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:19.036906  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:19.036919  141884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:19.148565  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965399.101726035
	
	I1212 01:03:19.148592  141884 fix.go:216] guest clock: 1733965399.101726035
	I1212 01:03:19.148602  141884 fix.go:229] Guest: 2024-12-12 01:03:19.101726035 +0000 UTC Remote: 2024-12-12 01:03:19.032967067 +0000 UTC m=+242.472137495 (delta=68.758968ms)
	I1212 01:03:19.148628  141884 fix.go:200] guest clock delta is within tolerance: 68.758968ms
	I1212 01:03:19.148635  141884 start.go:83] releasing machines lock for "default-k8s-diff-port-076578", held for 19.631903968s
	I1212 01:03:19.148688  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.149016  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:19.151497  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.151926  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.151954  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.152124  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152598  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152762  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152834  141884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:19.152892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.152952  141884 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:19.152972  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.155620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155694  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.155962  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156057  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.156114  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156123  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156316  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156327  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156469  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156583  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156619  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156826  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.156824  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.268001  141884 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:19.275696  141884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:19.426624  141884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:19.432842  141884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:19.432911  141884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:19.449082  141884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:19.449108  141884 start.go:495] detecting cgroup driver to use...
	I1212 01:03:19.449187  141884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:19.466543  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:19.482668  141884 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:19.482733  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:19.497124  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:19.512626  141884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:19.624948  141884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:19.779469  141884 docker.go:233] disabling docker service ...
	I1212 01:03:19.779545  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:19.794888  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:19.810497  141884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:19.954827  141884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:20.086435  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:20.100917  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:20.120623  141884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:20.120683  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.134353  141884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:20.134431  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.150373  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.165933  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.181524  141884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:20.196891  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.209752  141884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.228990  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
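The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, reset conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. Purely as an illustration of that idempotent line-replacement technique (not minikube's code), the first two edits could equally be done locally with Go's regexp package instead of sed:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Replace whole lines, matching what the sed expressions in the log do.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	data = pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = cgroup.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}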
	I1212 01:03:20.241553  141884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:20.251819  141884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:20.251883  141884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:20.267155  141884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:20.277683  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:20.427608  141884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:20.525699  141884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:20.525804  141884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:20.530984  141884 start.go:563] Will wait 60s for crictl version
	I1212 01:03:20.531055  141884 ssh_runner.go:195] Run: which crictl
	I1212 01:03:20.535013  141884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:20.576177  141884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:20.576251  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.605529  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.638175  141884 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:20.639475  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:20.642566  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643001  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:20.643034  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643196  141884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:20.647715  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:20.662215  141884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:20.662337  141884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:20.662381  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:20.705014  141884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:20.705112  141884 ssh_runner.go:195] Run: which lz4
	I1212 01:03:20.709477  141884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:20.714111  141884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:20.714145  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:19.666527  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:21.666676  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:24.165316  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:20.457742  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting to get IP...
	I1212 01:03:20.458818  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.459318  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.459384  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.459280  143077 retry.go:31] will retry after 312.060355ms: waiting for machine to come up
	I1212 01:03:20.772778  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.773842  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.773876  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.773802  143077 retry.go:31] will retry after 381.023448ms: waiting for machine to come up
	I1212 01:03:21.156449  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.156985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.157017  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.156943  143077 retry.go:31] will retry after 395.528873ms: waiting for machine to come up
	I1212 01:03:21.554397  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.554873  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.554905  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.554833  143077 retry.go:31] will retry after 542.808989ms: waiting for machine to come up
	I1212 01:03:22.099791  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.100330  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.100360  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.100301  143077 retry.go:31] will retry after 627.111518ms: waiting for machine to come up
	I1212 01:03:22.728727  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.729219  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.729244  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.729167  143077 retry.go:31] will retry after 649.039654ms: waiting for machine to come up
	I1212 01:03:23.379498  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:23.379935  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:23.379968  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:23.379864  143077 retry.go:31] will retry after 1.057286952s: waiting for machine to come up
	I1212 01:03:24.438408  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:24.438821  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:24.438849  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:24.438774  143077 retry.go:31] will retry after 912.755322ms: waiting for machine to come up
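The repeated "will retry after ..." lines from retry.go are a growing, jittered backoff loop around the libvirt DHCP lease lookup: each attempt waits a bit longer (312ms, 381ms, 395ms, 542ms, ... up past 1s) until the domain reports an IP or the overall start timeout expires. A rough, self-contained sketch of that pattern, with a hypothetical lookupIP standing in for the libmachine lease query and an assumed 4-minute deadline (the real backoff policy differs in detail):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the domain's DHCP lease.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	wait := 300 * time.Millisecond

	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Add jitter and grow the interval, roughly like the intervals in the log.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait = wait * 3 / 2
	}
	fmt.Println("timed out waiting for machine to come up")
}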
	I1212 01:03:22.285157  141884 crio.go:462] duration metric: took 1.575709911s to copy over tarball
	I1212 01:03:22.285258  141884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:24.495814  141884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210502234s)
	I1212 01:03:24.495848  141884 crio.go:469] duration metric: took 2.210655432s to extract the tarball
	I1212 01:03:24.495857  141884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:24.533396  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:24.581392  141884 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:24.581419  141884 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:24.581428  141884 kubeadm.go:934] updating node { 192.168.39.174 8444 v1.31.2 crio true true} ...
	I1212 01:03:24.581524  141884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-076578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:24.581594  141884 ssh_runner.go:195] Run: crio config
	I1212 01:03:24.625042  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:24.625073  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:24.625083  141884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:24.625111  141884 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-076578 NodeName:default-k8s-diff-port-076578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:24.625238  141884 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-076578"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:24.625313  141884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:24.635818  141884 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:24.635903  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:24.645966  141884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:03:24.665547  141884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:24.682639  141884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
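The rendered config shown above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A quick way to sanity-check such a file before handing it to kubeadm, sketched with gopkg.in/yaml.v3 rather than kubeadm's own API types:

package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		// Print which kubeadm/kubelet/kube-proxy document this is.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}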
	I1212 01:03:24.700147  141884 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:24.704172  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:24.716697  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:24.842374  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:24.860641  141884 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578 for IP: 192.168.39.174
	I1212 01:03:24.860676  141884 certs.go:194] generating shared ca certs ...
	I1212 01:03:24.860700  141884 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:24.860888  141884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:24.860955  141884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:24.860970  141884 certs.go:256] generating profile certs ...
	I1212 01:03:24.861110  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.key
	I1212 01:03:24.861200  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key.4a68806a
	I1212 01:03:24.861251  141884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key
	I1212 01:03:24.861391  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:24.861444  141884 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:24.861458  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:24.861498  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:24.861535  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:24.861565  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:24.861629  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:24.862588  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:24.899764  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:24.950373  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:24.983222  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:25.017208  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 01:03:25.042653  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:25.071358  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:25.097200  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:25.122209  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:25.150544  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:25.181427  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:25.210857  141884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:25.229580  141884 ssh_runner.go:195] Run: openssl version
	I1212 01:03:25.236346  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:25.247510  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252355  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252407  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.258511  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:25.272698  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:25.289098  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295737  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295806  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.304133  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:25.315805  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:25.328327  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333482  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333539  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.339367  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:25.351612  141884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:25.357060  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:25.363452  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:25.369984  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:25.376434  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:25.382895  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:25.389199  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
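The run of openssl x509 -checkend 86400 calls above asks whether each control-plane certificate is still valid for at least the next 24 hours; a non-zero exit from any of them would trigger certificate regeneration instead of the "skipping valid ... cert" path. The equivalent check in Go with crypto/x509, shown only to illustrate what -checkend 86400 means:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Any of the certificate paths listed in the log could be passed here.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `openssl x509 -checkend 86400`.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate is valid until", cert.NotAfter)
}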
	I1212 01:03:25.395232  141884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:25.395325  141884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:25.395370  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.439669  141884 cri.go:89] found id: ""
	I1212 01:03:25.439749  141884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:25.453870  141884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:25.453893  141884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:25.453951  141884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:25.464552  141884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:25.465609  141884 kubeconfig.go:125] found "default-k8s-diff-port-076578" server: "https://192.168.39.174:8444"
	I1212 01:03:25.467767  141884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:25.477907  141884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I1212 01:03:25.477943  141884 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:25.477958  141884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:25.478018  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.521891  141884 cri.go:89] found id: ""
	I1212 01:03:25.521978  141884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:25.539029  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:25.549261  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:25.549283  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:25.549341  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:03:25.558948  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:25.559022  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:25.568947  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:03:25.579509  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:25.579614  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:25.589573  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.600434  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:25.600498  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.610337  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:03:25.619956  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:25.620014  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:25.631231  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:25.641366  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:25.761159  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:26.165525  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:28.168457  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.168492  141469 pod_ready.go:82] duration metric: took 10.510517291s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.168506  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175334  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.175361  141469 pod_ready.go:82] duration metric: took 6.84531ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175375  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183060  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.183093  141469 pod_ready.go:82] duration metric: took 7.709158ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183106  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.190999  141469 pod_ready.go:93] pod "kube-proxy-9f6lj" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.191028  141469 pod_ready.go:82] duration metric: took 7.913069ms for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.191040  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199945  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.199972  141469 pod_ready.go:82] duration metric: took 8.923682ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199984  141469 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
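The pod_ready.go lines above poll each control-plane pod until its PodReady condition reports True, with a 4m0s cap per pod and the observed duration logged when the condition flips. A condensed sketch of the same wait using client-go (this is not the test helper itself, and it assumes a kubeconfig at the default location):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Pod name taken from the log; substitute whichever pod you are waiting on.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-607268", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		return podReady(pod), nil
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pod is Ready")
}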
	I1212 01:03:25.352682  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:25.353126  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:25.353154  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:25.353073  143077 retry.go:31] will retry after 1.136505266s: waiting for machine to come up
	I1212 01:03:26.491444  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:26.491927  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:26.491955  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:26.491868  143077 retry.go:31] will retry after 1.467959561s: waiting for machine to come up
	I1212 01:03:27.961709  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:27.962220  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:27.962255  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:27.962169  143077 retry.go:31] will retry after 2.70831008s: waiting for machine to come up
	I1212 01:03:26.830271  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069070962s)
	I1212 01:03:26.830326  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.035935  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.113317  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.210226  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:27.210329  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:27.710504  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.211114  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.242967  141884 api_server.go:72] duration metric: took 1.032736901s to wait for apiserver process to appear ...
	I1212 01:03:28.243012  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:28.243038  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:28.243643  141884 api_server.go:269] stopped: https://192.168.39.174:8444/healthz: Get "https://192.168.39.174:8444/healthz": dial tcp 192.168.39.174:8444: connect: connection refused
	I1212 01:03:28.743921  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.546075  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.546113  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.546129  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.621583  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.621619  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.743860  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.750006  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:31.750052  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.243382  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.269990  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.270033  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.743516  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.752979  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.753012  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:33.243571  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:33.247902  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:03:33.253786  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:33.253810  141884 api_server.go:131] duration metric: took 5.010790107s to wait for apiserver health ...
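The api_server.go entries above poll https://192.168.39.174:8444/healthz until it returns 200, treating 403 (RBAC not yet bootstrapped for anonymous requests) and 500 (poststarthooks such as rbac/bootstrap-roles still failing) as "not ready yet" and retrying on a short interval. A bare-bones version of that loop, skipping TLS verification the same way an anonymous probe against a freshly restarted apiserver would have to (a sketch, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.174:8444/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver's serving cert is not in the probe's trust store yet.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}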
	I1212 01:03:33.253820  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:33.253826  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:33.255762  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:30.208396  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:32.708024  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:30.671930  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:30.672414  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:30.672442  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:30.672366  143077 retry.go:31] will retry after 2.799706675s: waiting for machine to come up
	I1212 01:03:33.474261  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:33.474816  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:33.474851  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:33.474758  143077 retry.go:31] will retry after 4.339389188s: waiting for machine to come up
	I1212 01:03:33.257007  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:33.267934  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:33.286197  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:33.297934  141884 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:33.297982  141884 system_pods.go:61] "coredns-7c65d6cfc9-xn886" [db1f42f1-93d9-4942-813d-e3de1cc24801] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:33.297995  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [25555578-8169-4986-aa10-06a442152c50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:33.298006  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [1004c64c-91ca-43c3-9c3d-43dab13d3812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:33.298023  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [63d42313-4ea9-44f9-a8eb-b0c6c73424c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:33.298039  141884 system_pods.go:61] "kube-proxy-7frgh" [191ed421-4297-47c7-a46d-407a8eaa0378] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:33.298049  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [1506a505-697c-4b80-b7ef-55de1116fa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:33.298060  141884 system_pods.go:61] "metrics-server-6867b74b74-k9s7n" [806badc0-b609-421f-9203-3fd91212a145] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:33.298077  141884 system_pods.go:61] "storage-provisioner" [bc133673-b7e2-42b2-98ac-e3284c9162ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:33.298090  141884 system_pods.go:74] duration metric: took 11.875762ms to wait for pod list to return data ...
	I1212 01:03:33.298105  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:33.302482  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:33.302517  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:33.302532  141884 node_conditions.go:105] duration metric: took 4.418219ms to run NodePressure ...
	I1212 01:03:33.302555  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:33.728028  141884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735780  141884 kubeadm.go:739] kubelet initialised
	I1212 01:03:33.735810  141884 kubeadm.go:740] duration metric: took 7.738781ms waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735824  141884 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:33.743413  141884 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:35.750012  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.348909  141411 start.go:364] duration metric: took 54.693436928s to acquireMachinesLock for "no-preload-242725"
	I1212 01:03:39.348976  141411 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:39.348990  141411 fix.go:54] fixHost starting: 
	I1212 01:03:39.349442  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:39.349485  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:39.367203  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I1212 01:03:39.367584  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:39.368158  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:03:39.368185  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:39.368540  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:39.368717  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:39.368854  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:03:39.370433  141411 fix.go:112] recreateIfNeeded on no-preload-242725: state=Stopped err=<nil>
	I1212 01:03:39.370460  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	W1212 01:03:39.370594  141411 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:39.372621  141411 out.go:177] * Restarting existing kvm2 VM for "no-preload-242725" ...
	I1212 01:03:35.206417  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.208384  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.818233  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818777  142150 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 01:03:37.818808  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818818  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 01:03:37.819321  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.819376  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | skip adding static IP to network mk-old-k8s-version-738445 - found existing host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"}
	I1212 01:03:37.819390  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
	I1212 01:03:37.819412  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 01:03:37.819428  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 01:03:37.821654  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822057  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.822084  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822234  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 01:03:37.822265  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 01:03:37.822311  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:37.822325  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 01:03:37.822346  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 01:03:37.951989  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:37.952380  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 01:03:37.953037  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:37.955447  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.955770  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.955801  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.956073  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 01:03:37.956261  142150 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:37.956281  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:37.956490  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:37.958938  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959225  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.959262  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959406  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:37.959569  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959749  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959912  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:37.960101  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:37.960348  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:37.960364  142150 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:38.076202  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:38.076231  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076484  142150 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 01:03:38.076506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076678  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.079316  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079689  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.079717  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.080047  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080178  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080313  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.080481  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.080693  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.080708  142150 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 01:03:38.212896  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 01:03:38.212934  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.215879  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216314  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.216353  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216568  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.216792  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.216980  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.217138  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.217321  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.217556  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.217574  142150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:38.341064  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:38.341103  142150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:38.341148  142150 buildroot.go:174] setting up certificates
	I1212 01:03:38.341167  142150 provision.go:84] configureAuth start
	I1212 01:03:38.341182  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.341471  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:38.343939  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344355  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.344385  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.346597  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.346910  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.346960  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.347103  142150 provision.go:143] copyHostCerts
	I1212 01:03:38.347168  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:38.347188  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:38.347247  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:38.347363  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:38.347373  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:38.347397  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:38.347450  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:38.347457  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:38.347476  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:38.347523  142150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
	I1212 01:03:38.675149  142150 provision.go:177] copyRemoteCerts
	I1212 01:03:38.675217  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:38.675251  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.678239  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678639  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.678677  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.679049  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.679174  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.679294  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:38.770527  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:38.797696  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:38.822454  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 01:03:38.847111  142150 provision.go:87] duration metric: took 505.925391ms to configureAuth
	I1212 01:03:38.847145  142150 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:38.847366  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 01:03:38.847459  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.850243  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850594  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.850621  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850779  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.850981  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851153  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851340  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.851581  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.851786  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.851803  142150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:39.093404  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:39.093440  142150 machine.go:96] duration metric: took 1.137164233s to provisionDockerMachine
	I1212 01:03:39.093457  142150 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 01:03:39.093474  142150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:39.093516  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.093848  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:39.093891  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.096719  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097117  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.097151  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097305  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.097497  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.097650  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.097773  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.186726  142150 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:39.191223  142150 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:39.191249  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:39.191337  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:39.191438  142150 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:39.191557  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:39.201460  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:39.229101  142150 start.go:296] duration metric: took 135.624628ms for postStartSetup
	I1212 01:03:39.229146  142150 fix.go:56] duration metric: took 20.080331642s for fixHost
	I1212 01:03:39.229168  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.231985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232443  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.232479  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232702  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.232913  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233076  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233213  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.233368  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:39.233632  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:39.233649  142150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:39.348721  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965419.319505647
	
	I1212 01:03:39.348749  142150 fix.go:216] guest clock: 1733965419.319505647
	I1212 01:03:39.348761  142150 fix.go:229] Guest: 2024-12-12 01:03:39.319505647 +0000 UTC Remote: 2024-12-12 01:03:39.229149912 +0000 UTC m=+234.032647876 (delta=90.355735ms)
	I1212 01:03:39.348787  142150 fix.go:200] guest clock delta is within tolerance: 90.355735ms
	I1212 01:03:39.348796  142150 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 20.20001796s
	I1212 01:03:39.348829  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.349099  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:39.352088  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352481  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.352510  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352667  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353244  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353428  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353528  142150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:39.353575  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.353645  142150 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:39.353674  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.356260  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356614  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.356644  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356675  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356908  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357112  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.357172  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.357293  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357375  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357438  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.357514  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357652  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357765  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.441961  142150 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:39.478428  142150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:39.631428  142150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:39.637870  142150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:39.637958  142150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:39.655923  142150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:39.655951  142150 start.go:495] detecting cgroup driver to use...
	I1212 01:03:39.656042  142150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:39.676895  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:39.692966  142150 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:39.693048  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:39.710244  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:39.725830  142150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:39.848998  142150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:40.014388  142150 docker.go:233] disabling docker service ...
	I1212 01:03:40.014458  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:40.035579  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:40.052188  142150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:40.184958  142150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:40.332719  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:40.349338  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:40.371164  142150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:03:40.371232  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.382363  142150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:40.382437  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.393175  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.404397  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.417867  142150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:40.432988  142150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:40.447070  142150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:40.447145  142150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:40.460260  142150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:40.472139  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:40.616029  142150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:40.724787  142150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:40.724874  142150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:40.732096  142150 start.go:563] Will wait 60s for crictl version
	I1212 01:03:40.732168  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:40.737266  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:40.790677  142150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:40.790765  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.825617  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.857257  142150 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1212 01:03:37.750453  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.752224  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.374093  141411 main.go:141] libmachine: (no-preload-242725) Calling .Start
	I1212 01:03:39.374303  141411 main.go:141] libmachine: (no-preload-242725) Ensuring networks are active...
	I1212 01:03:39.375021  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network default is active
	I1212 01:03:39.375456  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network mk-no-preload-242725 is active
	I1212 01:03:39.375951  141411 main.go:141] libmachine: (no-preload-242725) Getting domain xml...
	I1212 01:03:39.376726  141411 main.go:141] libmachine: (no-preload-242725) Creating domain...
	I1212 01:03:40.703754  141411 main.go:141] libmachine: (no-preload-242725) Waiting to get IP...
	I1212 01:03:40.705274  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.705752  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.705821  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.705709  143226 retry.go:31] will retry after 196.576482ms: waiting for machine to come up
	I1212 01:03:40.904341  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.904718  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.904740  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.904669  143226 retry.go:31] will retry after 375.936901ms: waiting for machine to come up
	I1212 01:03:41.282278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.282839  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.282871  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.282793  143226 retry.go:31] will retry after 427.731576ms: waiting for machine to come up
	I1212 01:03:41.712553  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.713198  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.713231  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.713084  143226 retry.go:31] will retry after 421.07445ms: waiting for machine to come up
	I1212 01:03:39.707174  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:41.711103  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.207685  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:40.858851  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:40.861713  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:40.862166  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862355  142150 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:40.866911  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:40.879513  142150 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:40.879655  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 01:03:40.879718  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:40.927436  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:40.927517  142150 ssh_runner.go:195] Run: which lz4
	I1212 01:03:40.932446  142150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:40.937432  142150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:40.937461  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 01:03:42.695407  142150 crio.go:462] duration metric: took 1.763008004s to copy over tarball
	I1212 01:03:42.695494  142150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:41.768335  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.252708  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.754333  141884 pod_ready.go:93] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.754362  141884 pod_ready.go:82] duration metric: took 11.010925207s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.754371  141884 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760121  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.760142  141884 pod_ready.go:82] duration metric: took 5.764171ms for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760151  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765554  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.765575  141884 pod_ready.go:82] duration metric: took 5.417017ms for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765589  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:42.135878  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.136341  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.136367  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.136284  143226 retry.go:31] will retry after 477.81881ms: waiting for machine to come up
	I1212 01:03:42.616400  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.616906  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.616929  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.616858  143226 retry.go:31] will retry after 597.608319ms: waiting for machine to come up
	I1212 01:03:43.215837  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:43.216430  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:43.216454  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:43.216363  143226 retry.go:31] will retry after 1.118837214s: waiting for machine to come up
	I1212 01:03:44.336666  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:44.337229  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:44.337253  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:44.337187  143226 retry.go:31] will retry after 1.008232952s: waiting for machine to come up
	I1212 01:03:45.346868  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:45.347386  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:45.347423  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:45.347307  143226 retry.go:31] will retry after 1.735263207s: waiting for machine to come up
	I1212 01:03:47.084570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:47.084980  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:47.085012  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:47.084931  143226 retry.go:31] will retry after 1.662677797s: waiting for machine to come up
	I1212 01:03:46.208324  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.707694  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:45.698009  142150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002470206s)
	I1212 01:03:45.698041  142150 crio.go:469] duration metric: took 3.002598421s to extract the tarball
	I1212 01:03:45.698057  142150 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:45.746245  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:45.783730  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:45.783758  142150 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:03:45.783842  142150 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.783850  142150 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.783909  142150 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.783919  142150 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:45.783965  142150 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.783988  142150 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.783989  142150 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.783935  142150 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.785706  142150 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.785722  142150 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785696  142150 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.785757  142150 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.010563  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.011085  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 01:03:46.072381  142150 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 01:03:46.072424  142150 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.072478  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.113400  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.113431  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.114036  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.114169  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.120739  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.124579  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.124728  142150 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 01:03:46.124754  142150 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 01:03:46.124784  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287160  142150 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 01:03:46.287214  142150 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.287266  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287272  142150 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 01:03:46.287303  142150 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.287353  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294327  142150 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 01:03:46.294369  142150 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.294417  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294420  142150 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 01:03:46.294451  142150 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.294488  142150 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 01:03:46.294501  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294519  142150 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.294547  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.294561  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294640  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.296734  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.297900  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.310329  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.400377  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.400443  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.400478  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.400489  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.426481  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.434403  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.434471  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.568795  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 01:03:46.568915  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.568956  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.569017  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.584299  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.584337  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.608442  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.716715  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.716749  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 01:03:46.727723  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.730180  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 01:03:46.730347  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 01:03:46.744080  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 01:03:46.770152  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 01:03:46.802332  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 01:03:48.053863  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:48.197060  142150 cache_images.go:92] duration metric: took 2.413284252s to LoadCachedImages
	W1212 01:03:48.197176  142150 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1212 01:03:48.197197  142150 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 01:03:48.197352  142150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:48.197443  142150 ssh_runner.go:195] Run: crio config
	I1212 01:03:48.246700  142150 cni.go:84] Creating CNI manager for ""
	I1212 01:03:48.246731  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:48.246743  142150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:48.246771  142150 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 01:03:48.246952  142150 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:48.247031  142150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 01:03:48.257337  142150 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:48.257412  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:48.267272  142150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 01:03:48.284319  142150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:48.301365  142150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1212 01:03:48.321703  142150 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:48.326805  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:48.343523  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:48.476596  142150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:48.497742  142150 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 01:03:48.497830  142150 certs.go:194] generating shared ca certs ...
	I1212 01:03:48.497859  142150 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:48.498094  142150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:48.498160  142150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:48.498177  142150 certs.go:256] generating profile certs ...
	I1212 01:03:48.498311  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 01:03:48.498388  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 01:03:48.498445  142150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 01:03:48.498603  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:48.498651  142150 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:48.498665  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:48.498700  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:48.498732  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:48.498761  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:48.498816  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:48.499418  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:48.546900  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:48.587413  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:48.617873  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:48.645334  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 01:03:48.673348  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:03:48.707990  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:48.748273  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:03:48.785187  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:48.818595  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:48.843735  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:48.871353  142150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:48.893168  142150 ssh_runner.go:195] Run: openssl version
	I1212 01:03:48.902034  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:48.916733  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921766  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921849  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.928169  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:48.939794  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:48.951260  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957920  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957987  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.965772  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:48.977889  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:48.989362  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995796  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995866  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:49.002440  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:49.014144  142150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:49.020570  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:49.027464  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:49.033770  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:49.040087  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:49.046103  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:49.052288  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:03:49.058638  142150 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:49.058762  142150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:49.058820  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.101711  142150 cri.go:89] found id: ""
	I1212 01:03:49.101800  142150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:49.113377  142150 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:49.113398  142150 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:49.113439  142150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:49.124296  142150 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:49.125851  142150 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:03:49.126876  142150 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-86355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-738445" cluster setting kubeconfig missing "old-k8s-version-738445" context setting]
	I1212 01:03:49.127925  142150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:49.129837  142150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:49.143200  142150 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.25
	I1212 01:03:49.143244  142150 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:49.143262  142150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:49.143339  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.190150  142150 cri.go:89] found id: ""
	I1212 01:03:49.190240  142150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:49.208500  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:49.219194  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:49.219221  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:49.219299  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:49.231345  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:49.231442  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:49.244931  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:49.254646  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:49.254721  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:49.264535  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.273770  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:49.273875  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.284129  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:49.293154  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:49.293221  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:49.302654  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:49.312579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:49.458825  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:48.069316  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.069362  141884 pod_ready.go:82] duration metric: took 3.303763458s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.069380  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328758  141884 pod_ready.go:93] pod "kube-proxy-7frgh" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.328784  141884 pod_ready.go:82] duration metric: took 259.396178ms for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328798  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337082  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.337106  141884 pod_ready.go:82] duration metric: took 8.298777ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337119  141884 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:50.343458  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.748914  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:48.749510  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:48.749535  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:48.749475  143226 retry.go:31] will retry after 2.670904101s: waiting for machine to come up
	I1212 01:03:51.421499  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:51.421915  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:51.421961  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:51.421862  143226 retry.go:31] will retry after 3.566697123s: waiting for machine to come up
	I1212 01:03:50.708435  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:53.207675  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:50.328104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.599973  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.749920  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.834972  142150 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:50.835093  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.335779  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.835728  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.335936  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.335817  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.836146  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.335264  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.835917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.344098  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.344166  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:56.345835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.990515  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:54.990916  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:54.990941  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:54.990869  143226 retry.go:31] will retry after 4.288131363s: waiting for machine to come up
	I1212 01:03:55.706167  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:57.707796  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:55.335677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:55.835164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.335826  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.835888  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.335539  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.835520  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.335630  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.835457  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.835939  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.843944  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.844210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:59.284312  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.284807  141411 main.go:141] libmachine: (no-preload-242725) Found IP for machine: 192.168.61.222
	I1212 01:03:59.284834  141411 main.go:141] libmachine: (no-preload-242725) Reserving static IP address...
	I1212 01:03:59.284851  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has current primary IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.285300  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.285334  141411 main.go:141] libmachine: (no-preload-242725) DBG | skip adding static IP to network mk-no-preload-242725 - found existing host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"}
	I1212 01:03:59.285357  141411 main.go:141] libmachine: (no-preload-242725) Reserved static IP address: 192.168.61.222
	I1212 01:03:59.285376  141411 main.go:141] libmachine: (no-preload-242725) Waiting for SSH to be available...
	I1212 01:03:59.285390  141411 main.go:141] libmachine: (no-preload-242725) DBG | Getting to WaitForSSH function...
	I1212 01:03:59.287532  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287840  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.287869  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287970  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH client type: external
	I1212 01:03:59.287998  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa (-rw-------)
	I1212 01:03:59.288043  141411 main.go:141] libmachine: (no-preload-242725) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:59.288066  141411 main.go:141] libmachine: (no-preload-242725) DBG | About to run SSH command:
	I1212 01:03:59.288092  141411 main.go:141] libmachine: (no-preload-242725) DBG | exit 0
	I1212 01:03:59.415723  141411 main.go:141] libmachine: (no-preload-242725) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:59.416104  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetConfigRaw
	I1212 01:03:59.416755  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.419446  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.419848  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.419879  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.420182  141411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json ...
	I1212 01:03:59.420388  141411 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:59.420412  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:59.420637  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.422922  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423257  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.423278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423432  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.423626  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423787  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423918  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.424051  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.424222  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.424231  141411 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:59.536768  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:59.536796  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537016  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:03:59.537042  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537234  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.539806  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540110  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.540141  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540337  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.540509  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540665  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540800  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.540973  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.541155  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.541171  141411 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-242725 && echo "no-preload-242725" | sudo tee /etc/hostname
	I1212 01:03:59.668244  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-242725
	
	I1212 01:03:59.668269  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.671021  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671353  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.671374  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671630  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.671851  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672000  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672160  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.672310  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.672485  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.672502  141411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-242725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-242725/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-242725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:59.792950  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:59.792985  141411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:59.793011  141411 buildroot.go:174] setting up certificates
	I1212 01:03:59.793024  141411 provision.go:84] configureAuth start
	I1212 01:03:59.793041  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.793366  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.796185  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796599  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.796638  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796783  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.799165  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799532  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.799558  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799711  141411 provision.go:143] copyHostCerts
	I1212 01:03:59.799780  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:59.799804  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:59.799869  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:59.800004  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:59.800015  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:59.800051  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:59.800144  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:59.800155  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:59.800182  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:59.800263  141411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.no-preload-242725 san=[127.0.0.1 192.168.61.222 localhost minikube no-preload-242725]
	I1212 01:03:59.987182  141411 provision.go:177] copyRemoteCerts
	I1212 01:03:59.987249  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:59.987290  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.989902  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990285  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.990317  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990520  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.990712  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.990856  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.990981  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.078289  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:04:00.103149  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:04:00.131107  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:04:00.159076  141411 provision.go:87] duration metric: took 366.034024ms to configureAuth
	I1212 01:04:00.159103  141411 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:04:00.159305  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:04:00.159401  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.162140  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162537  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.162570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162696  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.162864  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163016  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163124  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.163262  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.163436  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.163451  141411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:04:00.407729  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:04:00.407758  141411 machine.go:96] duration metric: took 987.35601ms to provisionDockerMachine
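	(The SSH command above writes the /etc/sysconfig/crio.minikube drop-in and restarts CRI-O so the --insecure-registry option takes effect. A minimal sketch of the same step in Go, using the system ssh binary rather than minikube's internal ssh_runner; host, user and key path are taken from the log, error handling is simplified:)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Host, user and key are the values shown in the log above.
	remote := "docker@192.168.61.222"
	key := "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa"

	// Same remote script as the logged command: write the drop-in and
	// restart CRI-O so the insecure-registry flag is picked up.
	script := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

	out, err := exec.Command("ssh", "-i", key, remote, script).CombinedOutput()
	if err != nil {
		fmt.Printf("provisioning failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("remote output:\n%s", out)
}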
	I1212 01:04:00.407773  141411 start.go:293] postStartSetup for "no-preload-242725" (driver="kvm2")
	I1212 01:04:00.407787  141411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:04:00.407810  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.408186  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:04:00.408218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.410950  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411329  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.411360  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411585  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.411809  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.411981  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.412115  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.498221  141411 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:04:00.502621  141411 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:04:00.502644  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:04:00.502705  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:04:00.502779  141411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:04:00.502863  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:04:00.512322  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:00.540201  141411 start.go:296] duration metric: took 132.410555ms for postStartSetup
	I1212 01:04:00.540250  141411 fix.go:56] duration metric: took 21.191260423s for fixHost
	I1212 01:04:00.540287  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.542631  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.542983  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.543011  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.543212  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.543393  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543556  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543702  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.543867  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.544081  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.544095  141411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:04:00.656532  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965440.609922961
	
	I1212 01:04:00.656560  141411 fix.go:216] guest clock: 1733965440.609922961
	I1212 01:04:00.656569  141411 fix.go:229] Guest: 2024-12-12 01:04:00.609922961 +0000 UTC Remote: 2024-12-12 01:04:00.540255801 +0000 UTC m=+358.475944555 (delta=69.66716ms)
	I1212 01:04:00.656597  141411 fix.go:200] guest clock delta is within tolerance: 69.66716ms
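	(fix.go compares the guest's `date +%s.%N` output against the host clock and skips a resync when the difference is small, as in the ~69.67ms delta logged above. A minimal sketch of that comparison; the 10s tolerance here is an assumption for illustration, not necessarily the value minikube uses:)

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock that no resync is needed.
func withinTolerance(guestStamp string, host time.Time, tol time.Duration) (time.Duration, bool) {
	secs, _ := strconv.ParseFloat(strings.TrimSpace(guestStamp), 64)
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Values from the log: guest reported 1733965440.609922961,
	// host read 2024-12-12 01:04:00.540255801 UTC.
	host := time.Date(2024, 12, 12, 1, 4, 0, 540255801, time.UTC)
	delta, ok := withinTolerance("1733965440.609922961", host, 10*time.Second)
	fmt.Println(delta, ok) // approximately 69.667ms, true
}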
	I1212 01:04:00.656616  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 21.307670093s
	I1212 01:04:00.656644  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.656898  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:00.659345  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659694  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.659722  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659878  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660405  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660584  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660663  141411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:04:00.660731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.660751  141411 ssh_runner.go:195] Run: cat /version.json
	I1212 01:04:00.660771  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.663331  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663458  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663717  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663757  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663789  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663802  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663867  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664039  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664044  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664201  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664202  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664359  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664359  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.664490  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.777379  141411 ssh_runner.go:195] Run: systemctl --version
	I1212 01:04:00.783765  141411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:04:00.933842  141411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:04:00.941376  141411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:04:00.941441  141411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:04:00.958993  141411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
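	(The find/mv step above renames any bridge or podman CNI configs to *.mk_disabled so they no longer conflict with the CNI minikube configures. Roughly the same operation in Go; run as root on the guest, illustrative only:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Rename bridge/podman CNI configs out of the way, as the logged
	// find -exec mv command does.
	entries, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range entries {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already sidelined
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Println("skip:", err)
				continue
			}
			fmt.Println("disabled", p)
		}
	}
}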
	I1212 01:04:00.959021  141411 start.go:495] detecting cgroup driver to use...
	I1212 01:04:00.959084  141411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:04:00.977166  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:04:00.991166  141411 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:04:00.991231  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:04:01.004993  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:04:01.018654  141411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:04:01.136762  141411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:04:01.300915  141411 docker.go:233] disabling docker service ...
	I1212 01:04:01.301036  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:04:01.316124  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:04:01.329544  141411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:04:01.451034  141411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:04:01.583471  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:04:01.611914  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:04:01.632628  141411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:04:01.632706  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.644315  141411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:04:01.644384  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.656980  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.668295  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.679885  141411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:04:01.692032  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.703893  141411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.724486  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
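	(The run of sed commands above patches /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and open net.ipv4.ip_unprivileged_port_start via default_sysctls. The sketch below shows only the first two rewrites, done with regexp instead of sed; it is an illustration, not minikube's code:)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteLine applies one sed-style "replace the whole matching line"
// substitution to the config text.
func rewriteLine(text, pattern, replacement string) string {
	return regexp.MustCompile("(?m)"+pattern).ReplaceAllString(text, replacement)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println(err)
		return
	}
	s := string(data)
	s = rewriteLine(s, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`)
	s = rewriteLine(s, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(conf, []byte(s), 0644); err != nil {
		fmt.Println(err)
	}
}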
	I1212 01:04:01.737251  141411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:04:01.748955  141411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:04:01.749025  141411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:04:01.763688  141411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
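	(The sysctl probe above fails because br_netfilter is not loaded yet, so the tooling loads the module and then enables IPv4 forwarding. A rough Go equivalent of that fallback; must run as root, illustrative only:)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Check the bridge netfilter sysctl; if the module is not loaded the
	// check fails, so load br_netfilter before continuing.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed:", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}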
	I1212 01:04:01.773871  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:01.903690  141411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:04:02.006921  141411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:04:02.007013  141411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:04:02.013116  141411 start.go:563] Will wait 60s for crictl version
	I1212 01:04:02.013187  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.017116  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:04:02.061210  141411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:04:02.061304  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.093941  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.124110  141411 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:59.708028  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:01.709056  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:04.207527  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.335673  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:00.835254  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.336063  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.835209  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.335874  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.835468  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.335332  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.835312  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.335965  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.835626  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.845618  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.346194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:02.125647  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:02.128481  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.128914  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:02.128973  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.129205  141411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 01:04:02.133801  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
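	(The one-liner above rewrites /etc/hosts so exactly one host.minikube.internal entry points at the gateway IP; the same pattern is used again further down for control-plane.minikube.internal. A small Go sketch of that idempotent rewrite, illustrative rather than minikube's actual code:)

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the hosts file so it contains exactly one
// line mapping name to ip, mirroring the logged grep/echo/cp pipeline.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) > 1 && fields[len(fields)-1] == name {
			continue // drop any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log above; needs root to write /etc/hosts.
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}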
	I1212 01:04:02.148892  141411 kubeadm.go:883] updating cluster {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:04:02.149001  141411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:04:02.149033  141411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:04:02.187762  141411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:04:02.187805  141411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:04:02.187934  141411 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.187988  141411 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.188025  141411 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.188070  141411 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.188118  141411 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.188220  141411 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.188332  141411 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1212 01:04:02.188501  141411 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.189594  141411 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.189674  141411 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.189892  141411 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.190015  141411 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1212 01:04:02.190121  141411 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.190152  141411 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.190169  141411 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.190746  141411 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.372557  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.375185  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.389611  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.394581  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.396799  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.408346  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1212 01:04:02.413152  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.438165  141411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1212 01:04:02.438217  141411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.438272  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.518752  141411 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1212 01:04:02.518804  141411 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.518856  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.556287  141411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1212 01:04:02.556329  141411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.556371  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569629  141411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1212 01:04:02.569671  141411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.569683  141411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1212 01:04:02.569721  141411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.569731  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569770  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667454  141411 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1212 01:04:02.667511  141411 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.667510  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.667532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.667549  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667632  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.667644  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.667671  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.683807  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.784024  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.797709  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.797836  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.797848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.797969  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.822411  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.880580  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.927305  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.928532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.928661  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.938172  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.973083  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:03.023699  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1212 01:04:03.023813  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.069822  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1212 01:04:03.069879  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1212 01:04:03.069920  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1212 01:04:03.069945  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:03.069973  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:03.069990  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:03.070037  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1212 01:04:03.070116  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:03.094188  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1212 01:04:03.094210  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094229  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1212 01:04:03.094249  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094285  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1212 01:04:03.094313  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1212 01:04:03.094379  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1212 01:04:03.094399  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1212 01:04:03.094480  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:04.469173  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.174822  141411 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.080313699s)
	I1212 01:04:05.174869  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1212 01:04:05.174899  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.08062641s)
	I1212 01:04:05.174928  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1212 01:04:05.174968  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.174994  141411 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 01:04:05.175034  141411 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.175086  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:05.175038  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.179340  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:06.207626  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:08.706815  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.335479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:05.835485  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.335252  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.835837  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.335166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.835880  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.336166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.335533  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.835771  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.843908  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:07.654693  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.479543185s)
	I1212 01:04:07.654721  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1212 01:04:07.654743  141411 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.654775  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.475408038s)
	I1212 01:04:07.654848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:07.654784  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.699286  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:09.647620  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.948278157s)
	I1212 01:04:09.647642  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.992718083s)
	I1212 01:04:09.647662  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1212 01:04:09.647683  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 01:04:09.647686  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647734  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647776  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:09.652886  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 01:04:11.112349  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.464585062s)
	I1212 01:04:11.112384  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1212 01:04:11.112412  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.112462  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.206933  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.208623  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.335255  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:10.835915  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.335375  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.835283  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.335618  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.835897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.335425  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.835757  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.335839  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.836078  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.844442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:14.845189  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.083753  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.971262547s)
	I1212 01:04:13.083788  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1212 01:04:13.083821  141411 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:13.083878  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:17.087777  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.003870257s)
	I1212 01:04:17.087818  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1212 01:04:17.087853  141411 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:17.087917  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:15.707981  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:18.207205  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:15.336090  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:15.835274  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.335372  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.835280  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.335431  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.835268  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.335492  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.835414  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.335266  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.835632  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.345467  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:19.845255  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:17.734979  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 01:04:17.735041  141411 cache_images.go:123] Successfully loaded all cached images
	I1212 01:04:17.735049  141411 cache_images.go:92] duration metric: took 15.547226992s to LoadCachedImages
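	(LoadCachedImages above boils down to, per image: inspect it in the runtime, and if it is missing or has the wrong ID, remove the stale tag and podman-load the cached tarball copied to /var/lib/minikube/images. A condensed sketch of that loop under those assumptions; the real code also transfers missing tarballs from the host cache and verifies image IDs against expected digests:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage loads a cached image tarball with podman when the tag is
// not already present in the CRI-O image store.
func ensureImage(tag, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", tag).Run(); err == nil {
		return nil // already present
	}
	// Remove any stale tag, then load the tarball copied from the host cache.
	_ = exec.Command("sudo", "crictl", "rmi", tag).Run()
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, strings.TrimSpace(string(out)))
	}
	return nil
}

func main() {
	// Two of the images from the log, as examples.
	images := map[string]string{
		"registry.k8s.io/kube-apiserver:v1.31.2": "/var/lib/minikube/images/kube-apiserver_v1.31.2",
		"registry.k8s.io/etcd:3.5.15-0":          "/var/lib/minikube/images/etcd_3.5.15-0",
	}
	for tag, tarball := range images {
		if err := ensureImage(tag, tarball); err != nil {
			fmt.Println(err)
		}
	}
}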
	I1212 01:04:17.735066  141411 kubeadm.go:934] updating node { 192.168.61.222 8443 v1.31.2 crio true true} ...
	I1212 01:04:17.735209  141411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-242725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:04:17.735311  141411 ssh_runner.go:195] Run: crio config
	I1212 01:04:17.780826  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:17.780850  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:17.780859  141411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:04:17.780882  141411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.222 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-242725 NodeName:no-preload-242725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:04:17.781025  141411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-242725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.222"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.222"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:04:17.781091  141411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:04:17.792290  141411 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:04:17.792374  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:04:17.802686  141411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1212 01:04:17.819496  141411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:04:17.836164  141411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1212 01:04:17.855844  141411 ssh_runner.go:195] Run: grep 192.168.61.222	control-plane.minikube.internal$ /etc/hosts
	I1212 01:04:17.860034  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:17.874418  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:18.011357  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:04:18.028641  141411 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725 for IP: 192.168.61.222
	I1212 01:04:18.028666  141411 certs.go:194] generating shared ca certs ...
	I1212 01:04:18.028683  141411 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:04:18.028880  141411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:04:18.028940  141411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:04:18.028954  141411 certs.go:256] generating profile certs ...
	I1212 01:04:18.029088  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.key
	I1212 01:04:18.029164  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key.f2ca822e
	I1212 01:04:18.029235  141411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key
	I1212 01:04:18.029404  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:04:18.029438  141411 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:04:18.029449  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:04:18.029485  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:04:18.029517  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:04:18.029555  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:04:18.029621  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:18.030313  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:04:18.082776  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:04:18.116012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:04:18.147385  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:04:18.180861  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:04:18.225067  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:04:18.255999  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:04:18.280193  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:04:18.304830  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:04:18.329012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:04:18.355462  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:04:18.379991  141411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:04:18.397637  141411 ssh_runner.go:195] Run: openssl version
	I1212 01:04:18.403727  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:04:18.415261  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419809  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419885  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.425687  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:04:18.438938  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:04:18.452150  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457050  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457116  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.463151  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:04:18.476193  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:04:18.489034  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493916  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493969  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.500285  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:04:18.513016  141411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:04:18.517996  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:04:18.524465  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:04:18.530607  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:04:18.536857  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:04:18.542734  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:04:18.548786  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
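	(The series of `openssl x509 -noout -checkend 86400` runs above verifies that each control-plane certificate stays valid for at least another day before reusing it. The same check expressed with Go's crypto/x509; paths are taken from the log, and the snippet is illustrative only:)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid for
// at least d from now, like `openssl x509 -checkend`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		ok, err := validFor(c, 24*time.Hour)
		fmt.Println(c, ok, err)
	}
}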
	I1212 01:04:18.554771  141411 kubeadm.go:392] StartCluster: {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:04:18.554897  141411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:04:18.554950  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.593038  141411 cri.go:89] found id: ""
	I1212 01:04:18.593131  141411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:04:18.604527  141411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:04:18.604550  141411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:04:18.604605  141411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:04:18.614764  141411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:04:18.616082  141411 kubeconfig.go:125] found "no-preload-242725" server: "https://192.168.61.222:8443"
	I1212 01:04:18.618611  141411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:04:18.628709  141411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.222
	I1212 01:04:18.628741  141411 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:04:18.628753  141411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:04:18.628814  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.673970  141411 cri.go:89] found id: ""
	I1212 01:04:18.674067  141411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:04:18.692603  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:04:18.704916  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:04:18.704940  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:04:18.704999  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:04:18.714952  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:04:18.715015  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:04:18.724982  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:04:18.734756  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:04:18.734817  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:04:18.744528  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.753898  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:04:18.753955  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.763929  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:04:18.773108  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:04:18.773153  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
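
The grep/rm pairs above apply one rule: a kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is treated as stale and deleted so kubeadm can regenerate it. A rough Go equivalent of that rule (the function name is ours; the URL is taken from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

const controlPlaneURL = "https://control-plane.minikube.internal:8443"

// removeIfStale deletes path unless it exists and mentions the expected
// control-plane URL, mirroring the grep-then-rm pattern in the log above.
func removeIfStale(path string) error {
	data, err := os.ReadFile(path)
	if err == nil && bytes.Contains(data, []byte(controlPlaneURL)) {
		return nil // looks current, keep it
	}
	// Missing or stale: remove it; "already gone" is not an error here.
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	fmt.Println("removed stale config:", path)
	return nil
}

func main() {
	for _, p := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(p); err != nil {
			fmt.Println("error:", err)
		}
	}
}
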
	I1212 01:04:18.782710  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:04:18.792750  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:18.902446  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.056638  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.154145942s)
	I1212 01:04:20.056677  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.275475  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.348697  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
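
Rather than a full kubeadm init, the restart path replays only the phases shown above, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same /var/tmp/minikube/kubeadm.yaml. A compact sketch of that sequence via os/exec; it assumes kubeadm is on PATH and the caller is root, simplifying the sudo/env wrapper the log uses:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phase order copied from the log above.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}
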
	I1212 01:04:20.483317  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:04:20.483487  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.983704  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.484485  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.526353  141411 api_server.go:72] duration metric: took 1.043031812s to wait for apiserver process to appear ...
	I1212 01:04:21.526389  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:04:21.526415  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:20.207458  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:22.212936  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:20.335276  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.835232  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.335776  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.835983  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.335369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.836160  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.335257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.835348  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.336170  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.835521  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.362548  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.362574  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.362586  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.380904  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.380939  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.527174  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.533112  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:24.533146  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.026678  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.031368  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.031409  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.526576  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.532260  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.532297  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:26.026741  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:26.031841  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:04:26.038198  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:04:26.038228  141411 api_server.go:131] duration metric: took 4.511829936s to wait for apiserver health ...
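
The 403 → 500 → 200 progression above is the normal start-up sequence: anonymous requests are first rejected, then the server responds but its post-start hooks (rbac/bootstrap-roles, the bootstrap priority classes) are still pending, and finally /healthz returns ok. A small polling sketch along the same lines; the endpoint is taken from the log, and skipping TLS verification is an illustration-only shortcut (minikube trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: real callers should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.222:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
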
	I1212 01:04:26.038240  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:26.038249  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:26.040150  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:04:22.343994  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:24.344818  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.346428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.041669  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:04:26.055010  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
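
The bridge CNI step drops a conflist into /etc/cni/net.d. The 496-byte file itself is not reproduced in the log, so the sketch below writes a representative bridge-plus-portmap conflist of the same shape; the JSON content, subnet, and permissions are assumptions, only the directory and file name come from the log:

package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist; not byte-identical to the file
// minikube deploys, and the 10.244.0.0/16 subnet is an assumption.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println("mkdir failed:", err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("wrote bridge CNI conflist")
}
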
	I1212 01:04:26.076860  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:04:26.092122  141411 system_pods.go:59] 8 kube-system pods found
	I1212 01:04:26.092154  141411 system_pods.go:61] "coredns-7c65d6cfc9-7w9dc" [878bfb78-fae5-4e05-b0ae-362841eace85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:04:26.092163  141411 system_pods.go:61] "etcd-no-preload-242725" [ed97c029-7933-4f4e-ab6c-f514b963ce21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:04:26.092170  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [df66d12b-b847-4ef3-b610-5679ff50e8c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:04:26.092175  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [eb5bc914-4267-41e8-9b37-26b7d3da9f68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:04:26.092180  141411 system_pods.go:61] "kube-proxy-rjwps" [fccefb3e-a282-4f0e-9070-11cc95bca868] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:04:26.092185  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [139de4ad-468c-4f1b-becf-3708bcaa7c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:04:26.092190  141411 system_pods.go:61] "metrics-server-6867b74b74-xzkbn" [16e0364c-18f9-43c2-9394-bc8548ce9caa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:04:26.092194  141411 system_pods.go:61] "storage-provisioner" [06c3232e-011a-4aff-b3ca-81858355bef4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:04:26.092200  141411 system_pods.go:74] duration metric: took 15.315757ms to wait for pod list to return data ...
	I1212 01:04:26.092208  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:04:26.095691  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:04:26.095715  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:04:26.095725  141411 node_conditions.go:105] duration metric: took 3.513466ms to run NodePressure ...
	I1212 01:04:26.095742  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:26.389652  141411 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398484  141411 kubeadm.go:739] kubelet initialised
	I1212 01:04:26.398513  141411 kubeadm.go:740] duration metric: took 8.824036ms waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398524  141411 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:04:26.406667  141411 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.416093  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416137  141411 pod_ready.go:82] duration metric: took 9.418311ms for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.416151  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416165  141411 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.422922  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422951  141411 pod_ready.go:82] duration metric: took 6.774244ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.422962  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422971  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.429822  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429854  141411 pod_ready.go:82] duration metric: took 6.874602ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.429866  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429875  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.483542  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483578  141411 pod_ready.go:82] duration metric: took 53.690915ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.483609  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483622  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
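
Every pod_ready line above is one iteration of the same check: read the pod's Ready condition and, while the hosting node is itself not Ready, skip the pod instead of failing. A minimal way to poll that condition by shelling out to kubectl; the context, namespace, and pod name come from the log, the loop and helper are ours:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady shells out to kubectl and reports whether the pod's Ready
// condition is True. Assumes kubectl and a valid kubeconfig are available.
func podReady(context, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context,
		"-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 40; i++ {
		ready, err := podReady("no-preload-242725", "kube-system", "kube-scheduler-no-preload-242725")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for pod readiness")
}
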
	I1212 01:04:24.707572  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:27.207073  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:25.335742  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:25.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.335824  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.836097  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.335807  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.835612  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.335615  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.835140  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.335695  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.843868  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.844684  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:28.081872  141411 pod_ready.go:93] pod "kube-proxy-rjwps" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:28.081901  141411 pod_ready.go:82] duration metric: took 1.598267411s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:28.081921  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:30.088965  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:32.099574  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:29.706557  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:31.706767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:33.706983  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.335304  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:30.835767  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.335536  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.836051  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.336149  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.835257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.335529  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.835959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.336054  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.835955  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.344074  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.345401  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:34.588690  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:34.588715  141411 pod_ready.go:82] duration metric: took 6.50678624s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:34.588727  141411 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:36.596475  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:36.207357  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:38.207516  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.335472  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:35.835166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.335337  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.336098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.835686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.335195  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.835464  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.336101  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.836164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.844602  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.845115  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.095215  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:41.594487  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.708001  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:42.708477  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.336111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:40.835714  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.335249  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.836111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.335205  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.836175  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.335577  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.835336  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.335947  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.835740  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.344150  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.844336  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:43.595231  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:46.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.708857  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:47.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.207408  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:45.335845  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:45.835169  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.335842  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.835872  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.335682  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.835761  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.336087  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.836134  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.844848  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.344941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:48.595492  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.095830  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.706544  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:50.335959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:50.835873  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:50.835996  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:50.878308  142150 cri.go:89] found id: ""
	I1212 01:04:50.878347  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.878360  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:50.878377  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:50.878444  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:50.914645  142150 cri.go:89] found id: ""
	I1212 01:04:50.914673  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.914681  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:50.914687  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:50.914736  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:50.954258  142150 cri.go:89] found id: ""
	I1212 01:04:50.954286  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.954307  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:50.954314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:50.954376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:50.993317  142150 cri.go:89] found id: ""
	I1212 01:04:50.993353  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.993361  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:50.993367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:50.993430  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:51.028521  142150 cri.go:89] found id: ""
	I1212 01:04:51.028551  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.028565  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:51.028572  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:51.028653  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:51.064752  142150 cri.go:89] found id: ""
	I1212 01:04:51.064779  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.064791  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:51.064799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:51.064861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:51.099780  142150 cri.go:89] found id: ""
	I1212 01:04:51.099809  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.099820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:51.099828  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:51.099910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:51.140668  142150 cri.go:89] found id: ""
	I1212 01:04:51.140696  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.140704  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:51.140713  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:51.140747  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.181092  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:51.181123  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:51.239873  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:51.239914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:51.256356  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:51.256383  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:51.391545  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:51.391573  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:51.391602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
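
The block above is the fallback path for the cluster running the v1.20.0 binaries: with no kube-apiserver process to wait on, every expected control-plane container is enumerated through crictl, none is found, and diagnostics fall back to journalctl, dmesg, and a (failing) describe nodes. A short sketch of the enumeration step, assuming crictl is installed and passwordless sudo is available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Matches the log: list container IDs (running or exited) by name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("%s: no containers found\n", name)
		} else {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
}
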
	I1212 01:04:53.965098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:53.981900  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:53.981994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:54.033922  142150 cri.go:89] found id: ""
	I1212 01:04:54.033955  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.033967  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:54.033975  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:54.034038  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:54.084594  142150 cri.go:89] found id: ""
	I1212 01:04:54.084623  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.084634  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:54.084641  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:54.084704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:54.132671  142150 cri.go:89] found id: ""
	I1212 01:04:54.132700  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.132708  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:54.132714  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:54.132768  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:54.169981  142150 cri.go:89] found id: ""
	I1212 01:04:54.170011  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.170019  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:54.170025  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:54.170078  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:54.207708  142150 cri.go:89] found id: ""
	I1212 01:04:54.207737  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.207747  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:54.207753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:54.207812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:54.248150  142150 cri.go:89] found id: ""
	I1212 01:04:54.248176  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.248184  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:54.248191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:54.248240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:54.287792  142150 cri.go:89] found id: ""
	I1212 01:04:54.287820  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.287829  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:54.287835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:54.287892  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:54.322288  142150 cri.go:89] found id: ""
	I1212 01:04:54.322319  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.322330  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:54.322347  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:54.322364  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:54.378947  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:54.378989  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:54.394801  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:54.394845  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:54.473896  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:54.473916  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:54.473929  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:54.558076  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:54.558135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.843857  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:54.345207  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.095934  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.598377  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.706720  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.707883  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.102923  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:57.117418  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:57.117478  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:57.157977  142150 cri.go:89] found id: ""
	I1212 01:04:57.158003  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.158012  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:57.158017  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:57.158074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:57.196388  142150 cri.go:89] found id: ""
	I1212 01:04:57.196417  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.196427  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:57.196432  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:57.196484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:57.238004  142150 cri.go:89] found id: ""
	I1212 01:04:57.238040  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.238048  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:57.238055  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:57.238124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:57.276619  142150 cri.go:89] found id: ""
	I1212 01:04:57.276665  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.276676  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:57.276684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:57.276750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:57.313697  142150 cri.go:89] found id: ""
	I1212 01:04:57.313733  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.313745  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:57.313753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:57.313823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:57.351569  142150 cri.go:89] found id: ""
	I1212 01:04:57.351616  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.351629  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:57.351637  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:57.351705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:57.386726  142150 cri.go:89] found id: ""
	I1212 01:04:57.386758  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.386766  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:57.386772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:57.386821  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:57.421496  142150 cri.go:89] found id: ""
	I1212 01:04:57.421524  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.421533  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:57.421543  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:57.421555  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:57.475374  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:57.475425  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:57.490771  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:57.490813  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:57.562485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:57.562513  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:57.562530  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:57.645022  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:57.645070  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.193526  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:00.209464  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:00.209539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:56.843562  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.843654  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:01.343428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.095640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.596162  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.207281  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:02.706000  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.248388  142150 cri.go:89] found id: ""
	I1212 01:05:00.248417  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.248426  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:00.248431  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:00.248480  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:00.284598  142150 cri.go:89] found id: ""
	I1212 01:05:00.284632  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.284642  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:00.284648  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:00.284710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:00.321068  142150 cri.go:89] found id: ""
	I1212 01:05:00.321107  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.321119  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:00.321127  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:00.321189  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:00.358622  142150 cri.go:89] found id: ""
	I1212 01:05:00.358651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.358660  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:00.358666  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:00.358720  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:00.398345  142150 cri.go:89] found id: ""
	I1212 01:05:00.398373  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.398383  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:00.398390  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:00.398442  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:00.437178  142150 cri.go:89] found id: ""
	I1212 01:05:00.437215  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.437227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:00.437235  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:00.437307  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:00.472621  142150 cri.go:89] found id: ""
	I1212 01:05:00.472651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.472662  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:00.472668  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:00.472735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:00.510240  142150 cri.go:89] found id: ""
	I1212 01:05:00.510268  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.510278  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:00.510288  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:00.510301  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:00.596798  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:00.596819  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:00.596830  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:00.673465  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:00.673506  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.716448  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:00.716485  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:00.770265  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:00.770303  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.285159  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:03.299981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:03.300043  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:03.335198  142150 cri.go:89] found id: ""
	I1212 01:05:03.335227  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.335239  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:03.335248  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:03.335319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:03.372624  142150 cri.go:89] found id: ""
	I1212 01:05:03.372651  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.372659  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:03.372665  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:03.372712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:03.408235  142150 cri.go:89] found id: ""
	I1212 01:05:03.408267  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.408279  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:03.408286  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:03.408350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:03.448035  142150 cri.go:89] found id: ""
	I1212 01:05:03.448068  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.448083  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:03.448091  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:03.448144  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:03.488563  142150 cri.go:89] found id: ""
	I1212 01:05:03.488593  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.488602  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:03.488607  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:03.488658  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:03.527858  142150 cri.go:89] found id: ""
	I1212 01:05:03.527886  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.527905  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:03.527913  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:03.527969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:03.564004  142150 cri.go:89] found id: ""
	I1212 01:05:03.564034  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.564044  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:03.564052  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:03.564113  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:03.610648  142150 cri.go:89] found id: ""
	I1212 01:05:03.610679  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.610691  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:03.610702  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:03.610716  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:03.666958  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:03.666996  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.680927  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:03.680961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:03.762843  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:03.762876  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:03.762894  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:03.838434  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:03.838472  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:03.344025  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.844236  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:03.095197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.096865  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:04.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.208202  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:06.377590  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:06.391770  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:06.391861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:06.430050  142150 cri.go:89] found id: ""
	I1212 01:05:06.430083  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.430096  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:06.430103  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:06.430168  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:06.467980  142150 cri.go:89] found id: ""
	I1212 01:05:06.468014  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.468026  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:06.468033  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:06.468090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:06.505111  142150 cri.go:89] found id: ""
	I1212 01:05:06.505144  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.505156  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:06.505165  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:06.505235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:06.542049  142150 cri.go:89] found id: ""
	I1212 01:05:06.542091  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.542104  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:06.542112  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:06.542175  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:06.576957  142150 cri.go:89] found id: ""
	I1212 01:05:06.576982  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.576991  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:06.576997  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:06.577050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:06.613930  142150 cri.go:89] found id: ""
	I1212 01:05:06.613963  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.613974  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:06.613980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:06.614045  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:06.654407  142150 cri.go:89] found id: ""
	I1212 01:05:06.654441  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.654450  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:06.654455  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:06.654503  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:06.691074  142150 cri.go:89] found id: ""
	I1212 01:05:06.691103  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.691112  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:06.691122  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:06.691133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:06.748638  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:06.748674  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:06.762741  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:06.762772  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:06.833840  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:06.833867  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:06.833885  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:06.914595  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:06.914649  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.461666  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:09.478815  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:09.478889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:09.515975  142150 cri.go:89] found id: ""
	I1212 01:05:09.516007  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.516019  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:09.516042  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:09.516120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:09.556933  142150 cri.go:89] found id: ""
	I1212 01:05:09.556965  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.556977  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:09.556985  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:09.557050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:09.593479  142150 cri.go:89] found id: ""
	I1212 01:05:09.593509  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.593520  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:09.593528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:09.593595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:09.633463  142150 cri.go:89] found id: ""
	I1212 01:05:09.633501  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.633513  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:09.633522  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:09.633583  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:09.666762  142150 cri.go:89] found id: ""
	I1212 01:05:09.666789  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.666798  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:09.666804  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:09.666871  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:09.704172  142150 cri.go:89] found id: ""
	I1212 01:05:09.704206  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.704217  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:09.704228  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:09.704288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:09.749679  142150 cri.go:89] found id: ""
	I1212 01:05:09.749708  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.749717  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:09.749724  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:09.749791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:09.789339  142150 cri.go:89] found id: ""
	I1212 01:05:09.789370  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.789379  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:09.789388  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:09.789399  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:09.875218  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:09.875259  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.918042  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:09.918074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:09.971010  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:09.971052  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:09.985524  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:09.985553  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:10.059280  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
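	(The cycle above is minikube's diagnostic fallback: with no kube-apiserver container found, it re-runs crictl for each control-plane component and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A rough manual sketch of the same checks, with the commands and flags copied from the log lines above; paths assume the minikube VM layout shown in the log:

	# look for a running kube-apiserver process and container
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo crictl ps -a --quiet --name=kube-apiserver

	# the logs minikube falls back to when the container list is empty
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	)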
	I1212 01:05:08.343968  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:10.844912  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.595940  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.596206  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.094527  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.707469  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.206124  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.206285  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.560353  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:12.573641  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:12.573719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:12.611903  142150 cri.go:89] found id: ""
	I1212 01:05:12.611931  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.611940  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:12.611947  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:12.612019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:12.647038  142150 cri.go:89] found id: ""
	I1212 01:05:12.647078  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.647090  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:12.647099  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:12.647188  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:12.684078  142150 cri.go:89] found id: ""
	I1212 01:05:12.684111  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.684123  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:12.684132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:12.684194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:12.720094  142150 cri.go:89] found id: ""
	I1212 01:05:12.720125  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.720137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:12.720145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:12.720208  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:12.762457  142150 cri.go:89] found id: ""
	I1212 01:05:12.762492  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.762504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:12.762512  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:12.762564  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:12.798100  142150 cri.go:89] found id: ""
	I1212 01:05:12.798131  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.798139  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:12.798145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:12.798195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:12.832455  142150 cri.go:89] found id: ""
	I1212 01:05:12.832486  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.832494  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:12.832501  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:12.832558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:12.866206  142150 cri.go:89] found id: ""
	I1212 01:05:12.866239  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.866249  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:12.866258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:12.866273  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:12.918512  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:12.918550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:12.932506  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:12.932535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:13.011647  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:13.011670  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:13.011689  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:13.090522  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:13.090565  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:13.343045  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.343706  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.096430  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.097196  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.207697  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
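	(The interleaved pod_ready lines appear to come from the other StartStop clusters running in parallel, where the apiserver is reachable but metrics-server never reports Ready. A hedged manual equivalent of that poll; the pod name is taken from the log, and the jsonpath query is only an illustration, not the code path pod_ready.go uses:

	kubectl --namespace kube-system get pod metrics-server-6867b74b74-5bms9 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints "False" until the pod's Ready condition flips to "True"
	)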
	I1212 01:05:15.634171  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:15.648003  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:15.648067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:15.684747  142150 cri.go:89] found id: ""
	I1212 01:05:15.684780  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.684788  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:15.684795  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:15.684856  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:15.723209  142150 cri.go:89] found id: ""
	I1212 01:05:15.723236  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.723245  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:15.723252  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:15.723299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:15.761473  142150 cri.go:89] found id: ""
	I1212 01:05:15.761504  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.761513  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:15.761519  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:15.761588  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:15.795637  142150 cri.go:89] found id: ""
	I1212 01:05:15.795668  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.795677  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:15.795685  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:15.795735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:15.835576  142150 cri.go:89] found id: ""
	I1212 01:05:15.835616  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.835628  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:15.835636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:15.835690  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:15.877331  142150 cri.go:89] found id: ""
	I1212 01:05:15.877359  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.877370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:15.877379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:15.877440  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:15.914225  142150 cri.go:89] found id: ""
	I1212 01:05:15.914255  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.914265  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:15.914271  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:15.914323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:15.949819  142150 cri.go:89] found id: ""
	I1212 01:05:15.949845  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.949853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:15.949862  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:15.949877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:16.029950  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:16.029991  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:16.071065  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:16.071094  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:16.126731  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:16.126786  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:16.140774  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:16.140807  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:16.210269  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:18.710498  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:18.725380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:18.725462  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:18.762409  142150 cri.go:89] found id: ""
	I1212 01:05:18.762438  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.762446  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:18.762453  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:18.762501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:18.800308  142150 cri.go:89] found id: ""
	I1212 01:05:18.800336  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.800344  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:18.800351  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:18.800419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:18.834918  142150 cri.go:89] found id: ""
	I1212 01:05:18.834947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.834955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:18.834962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:18.835012  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:18.872434  142150 cri.go:89] found id: ""
	I1212 01:05:18.872470  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.872481  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:18.872490  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:18.872551  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:18.906919  142150 cri.go:89] found id: ""
	I1212 01:05:18.906947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.906955  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:18.906962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:18.907011  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:18.944626  142150 cri.go:89] found id: ""
	I1212 01:05:18.944661  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.944671  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:18.944677  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:18.944728  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:18.981196  142150 cri.go:89] found id: ""
	I1212 01:05:18.981224  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.981233  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:18.981239  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:18.981290  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:19.017640  142150 cri.go:89] found id: ""
	I1212 01:05:19.017669  142150 logs.go:282] 0 containers: []
	W1212 01:05:19.017679  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:19.017691  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:19.017728  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:19.089551  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:19.089582  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:19.089602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:19.176914  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:19.176958  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:19.223652  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:19.223694  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:19.281292  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:19.281353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:17.344863  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:19.348835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.595465  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:20.708087  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:22.708298  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.797351  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:21.811040  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:21.811120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:21.847213  142150 cri.go:89] found id: ""
	I1212 01:05:21.847242  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.847253  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:21.847261  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:21.847323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:21.883925  142150 cri.go:89] found id: ""
	I1212 01:05:21.883952  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.883961  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:21.883967  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:21.884029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:21.925919  142150 cri.go:89] found id: ""
	I1212 01:05:21.925946  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.925955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:21.925961  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:21.926025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:21.963672  142150 cri.go:89] found id: ""
	I1212 01:05:21.963708  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.963719  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:21.963728  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:21.963794  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:22.000058  142150 cri.go:89] found id: ""
	I1212 01:05:22.000086  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.000094  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:22.000100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:22.000153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:22.036262  142150 cri.go:89] found id: ""
	I1212 01:05:22.036294  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.036305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:22.036314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:22.036381  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:22.072312  142150 cri.go:89] found id: ""
	I1212 01:05:22.072348  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.072361  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:22.072369  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:22.072428  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:22.109376  142150 cri.go:89] found id: ""
	I1212 01:05:22.109406  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.109413  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:22.109422  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:22.109436  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:22.183975  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:22.184006  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:22.184024  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:22.262037  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:22.262076  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:22.306902  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:22.306934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:22.361922  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:22.361964  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:24.877203  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:24.891749  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:24.891822  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:24.926934  142150 cri.go:89] found id: ""
	I1212 01:05:24.926974  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.926987  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:24.926997  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:24.927061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:24.961756  142150 cri.go:89] found id: ""
	I1212 01:05:24.961791  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.961803  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:24.961812  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:24.961872  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:25.001414  142150 cri.go:89] found id: ""
	I1212 01:05:25.001449  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.001462  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:25.001470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:25.001536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:25.038398  142150 cri.go:89] found id: ""
	I1212 01:05:25.038429  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.038438  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:25.038443  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:25.038499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:25.074146  142150 cri.go:89] found id: ""
	I1212 01:05:25.074175  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.074184  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:25.074191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:25.074266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:25.112259  142150 cri.go:89] found id: ""
	I1212 01:05:25.112287  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.112295  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:25.112303  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:25.112366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:25.148819  142150 cri.go:89] found id: ""
	I1212 01:05:25.148846  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.148853  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:25.148859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:25.148916  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:25.191229  142150 cri.go:89] found id: ""
	I1212 01:05:25.191262  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.191274  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:25.191286  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:25.191298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:21.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:24.344442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:26.344638  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:23.095266  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.096246  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.097041  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.208225  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.706184  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.280584  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:25.280641  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:25.325436  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:25.325473  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:25.380358  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:25.380406  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:25.394854  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:25.394889  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:25.474359  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:27.975286  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:27.989833  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:27.989893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:28.027211  142150 cri.go:89] found id: ""
	I1212 01:05:28.027242  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.027254  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:28.027262  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:28.027319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:28.063115  142150 cri.go:89] found id: ""
	I1212 01:05:28.063147  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.063158  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:28.063165  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:28.063226  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:28.121959  142150 cri.go:89] found id: ""
	I1212 01:05:28.121993  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.122006  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:28.122014  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:28.122074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:28.161636  142150 cri.go:89] found id: ""
	I1212 01:05:28.161666  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.161674  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:28.161680  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:28.161745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:28.197581  142150 cri.go:89] found id: ""
	I1212 01:05:28.197615  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.197627  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:28.197636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:28.197704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:28.234811  142150 cri.go:89] found id: ""
	I1212 01:05:28.234839  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.234849  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:28.234857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:28.234914  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:28.275485  142150 cri.go:89] found id: ""
	I1212 01:05:28.275510  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.275518  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:28.275524  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:28.275570  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:28.311514  142150 cri.go:89] found id: ""
	I1212 01:05:28.311551  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.311562  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:28.311574  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:28.311608  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:28.362113  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:28.362153  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:28.376321  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:28.376353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:28.460365  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
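	[Editor's note] The "describe nodes" failure above, repeated throughout this log, is the central symptom: kubectl is refused on localhost:8443, which matches the empty kube-apiserver container listings. As a hedged illustration only (not minikube's own code), a minimal Go probe of that port could look like the sketch below; the function name probeAPIServer is hypothetical.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeAPIServer dials the apiserver's secure port and reports whether
    // anything is listening. A "connection refused" here corresponds to the
    // kubectl error recorded in the log above.
    func probeAPIServer(addr string) error {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
        }
        defer conn.Close()
        return nil
    }

    func main() {
        if err := probeAPIServer("localhost:8443"); err != nil {
            fmt.Println(err) // expected while kube-apiserver is not running
        } else {
            fmt.Println("something is listening on localhost:8443")
        }
    }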
	I1212 01:05:28.460394  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:28.460412  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:28.545655  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:28.545697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
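	[Editor's note] Each gathering cycle above runs "sudo crictl ps -a --quiet --name=<component>" for every control-plane component and treats empty output as "no container found". The following is a rough local sketch of that check, assuming crictl is on PATH and sudo is available; minikube itself runs the command over SSH via ssh_runner, so this is illustrative only.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs crictl reports for a name filter.
    // An empty result corresponds to the
    // `No container was found matching "<name>"` warnings in the log.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
        }
    }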
	I1212 01:05:28.850925  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.344959  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.595032  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.595989  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.706696  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:32.206728  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.206974  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
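	[Editor's note] The interleaved pod_ready.go lines come from the other test processes (PIDs 141884, 141411 and 141469) polling whether their metrics-server pods have reached the Ready condition. A client-go sketch of that kind of check is shown below; the helper name podIsReady and the kubeconfig loading are assumptions, not the tests' actual code.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the named pod has the Ready condition set to
    // True, mirroring the `has status "Ready":"False"` messages in the log.
    func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ready, err := podIsReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-k9s7n")
        fmt.Println(ready, err)
    }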
	I1212 01:05:31.088684  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:31.103954  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:31.104033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:31.143436  142150 cri.go:89] found id: ""
	I1212 01:05:31.143468  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.143478  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:31.143488  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:31.143541  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:31.181127  142150 cri.go:89] found id: ""
	I1212 01:05:31.181162  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.181173  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:31.181181  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:31.181246  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:31.217764  142150 cri.go:89] found id: ""
	I1212 01:05:31.217794  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.217805  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:31.217812  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:31.217882  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:31.253648  142150 cri.go:89] found id: ""
	I1212 01:05:31.253674  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.253683  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:31.253690  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:31.253745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:31.292365  142150 cri.go:89] found id: ""
	I1212 01:05:31.292393  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.292401  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:31.292407  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:31.292455  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:31.329834  142150 cri.go:89] found id: ""
	I1212 01:05:31.329866  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.329876  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:31.329883  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:31.329934  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:31.368679  142150 cri.go:89] found id: ""
	I1212 01:05:31.368712  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.368720  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:31.368726  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:31.368784  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:31.409003  142150 cri.go:89] found id: ""
	I1212 01:05:31.409028  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.409036  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:31.409053  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:31.409068  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:31.462888  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:31.462927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:31.477975  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:31.478011  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:31.545620  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:31.545648  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:31.545665  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:31.626530  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:31.626570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
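	[Editor's note] Before each gathering round the loop first checks for a running kube-apiserver process with "sudo pgrep -xnf kube-apiserver.*minikube.*", as the next line shows, and only then falls back to listing containers and collecting logs. A simplified, hypothetical version of that gate (not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // apiserverProcessRunning mirrors the pgrep gate in the log: exit status 0
    // means a matching kube-apiserver process exists, non-zero means it does not.
    func apiserverProcessRunning() bool {
        err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
        return err == nil
    }

    func main() {
        if apiserverProcessRunning() {
            fmt.Println("kube-apiserver process found")
        } else {
            fmt.Println("kube-apiserver process not found; gathering logs instead")
        }
    }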
	I1212 01:05:34.167917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:34.183293  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:34.183372  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:34.219167  142150 cri.go:89] found id: ""
	I1212 01:05:34.219191  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.219200  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:34.219206  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:34.219265  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:34.254552  142150 cri.go:89] found id: ""
	I1212 01:05:34.254580  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.254588  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:34.254594  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:34.254645  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:34.289933  142150 cri.go:89] found id: ""
	I1212 01:05:34.289960  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.289969  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:34.289975  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:34.290027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:34.325468  142150 cri.go:89] found id: ""
	I1212 01:05:34.325497  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.325505  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:34.325510  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:34.325558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:34.364154  142150 cri.go:89] found id: ""
	I1212 01:05:34.364185  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.364197  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:34.364205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:34.364256  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:34.400516  142150 cri.go:89] found id: ""
	I1212 01:05:34.400546  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.400554  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:34.400559  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:34.400621  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:34.437578  142150 cri.go:89] found id: ""
	I1212 01:05:34.437608  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.437616  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:34.437622  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:34.437687  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:34.472061  142150 cri.go:89] found id: ""
	I1212 01:05:34.472094  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.472105  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:34.472117  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:34.472135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.526286  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:34.526340  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:34.610616  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:34.610664  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:34.625098  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:34.625130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:34.699706  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:34.699736  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:34.699759  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:33.844343  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.343847  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.096631  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.594963  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.707213  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:39.207473  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:37.282716  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:37.299415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:37.299486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:37.337783  142150 cri.go:89] found id: ""
	I1212 01:05:37.337820  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.337833  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:37.337842  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:37.337910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:37.375491  142150 cri.go:89] found id: ""
	I1212 01:05:37.375526  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.375539  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:37.375547  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:37.375637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:37.417980  142150 cri.go:89] found id: ""
	I1212 01:05:37.418016  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.418028  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:37.418037  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:37.418115  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:37.454902  142150 cri.go:89] found id: ""
	I1212 01:05:37.454936  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.454947  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:37.454956  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:37.455029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:37.492144  142150 cri.go:89] found id: ""
	I1212 01:05:37.492175  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.492188  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:37.492196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:37.492266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:37.531054  142150 cri.go:89] found id: ""
	I1212 01:05:37.531085  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.531094  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:37.531100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:37.531161  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:37.565127  142150 cri.go:89] found id: ""
	I1212 01:05:37.565169  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.565191  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:37.565209  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:37.565269  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:37.601233  142150 cri.go:89] found id: ""
	I1212 01:05:37.601273  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.601286  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:37.601300  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:37.601315  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:37.652133  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:37.652172  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:37.666974  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:37.667007  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:37.744500  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:37.744527  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:37.744544  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:37.825572  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:37.825611  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:38.842756  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.845163  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:38.595482  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.595779  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:41.707367  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:44.206693  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.366883  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:40.380597  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:40.380662  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:40.417588  142150 cri.go:89] found id: ""
	I1212 01:05:40.417614  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.417623  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:40.417629  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:40.417681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:40.452506  142150 cri.go:89] found id: ""
	I1212 01:05:40.452535  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.452547  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:40.452555  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:40.452620  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:40.496623  142150 cri.go:89] found id: ""
	I1212 01:05:40.496657  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.496669  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:40.496681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:40.496755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:40.534202  142150 cri.go:89] found id: ""
	I1212 01:05:40.534241  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.534266  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:40.534277  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:40.534337  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:40.580317  142150 cri.go:89] found id: ""
	I1212 01:05:40.580346  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.580359  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:40.580367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:40.580437  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:40.616814  142150 cri.go:89] found id: ""
	I1212 01:05:40.616842  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.616850  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:40.616857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:40.616909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:40.653553  142150 cri.go:89] found id: ""
	I1212 01:05:40.653584  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.653593  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:40.653603  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:40.653667  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:40.687817  142150 cri.go:89] found id: ""
	I1212 01:05:40.687843  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.687852  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:40.687862  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:40.687872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:40.739304  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:40.739343  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:40.753042  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:40.753074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:40.820091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:40.820112  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:40.820126  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:40.903503  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:40.903561  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.446157  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:43.461289  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:43.461365  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:43.503352  142150 cri.go:89] found id: ""
	I1212 01:05:43.503385  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.503394  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:43.503402  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:43.503466  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:43.541576  142150 cri.go:89] found id: ""
	I1212 01:05:43.541610  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.541619  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:43.541626  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:43.541683  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:43.581255  142150 cri.go:89] found id: ""
	I1212 01:05:43.581285  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.581298  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:43.581305  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:43.581384  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:43.622081  142150 cri.go:89] found id: ""
	I1212 01:05:43.622114  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.622126  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:43.622135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:43.622201  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:43.657001  142150 cri.go:89] found id: ""
	I1212 01:05:43.657032  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.657041  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:43.657048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:43.657114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:43.691333  142150 cri.go:89] found id: ""
	I1212 01:05:43.691362  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.691370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:43.691376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:43.691425  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:43.728745  142150 cri.go:89] found id: ""
	I1212 01:05:43.728779  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.728791  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:43.728799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:43.728864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:43.764196  142150 cri.go:89] found id: ""
	I1212 01:05:43.764229  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.764241  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:43.764253  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:43.764268  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.804433  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:43.804469  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:43.858783  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:43.858822  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:43.873582  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:43.873610  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:43.949922  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:43.949945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:43.949962  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:43.343827  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.346793  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:43.095993  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.096437  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.206828  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:48.708067  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.531390  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:46.546806  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:46.546881  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:46.583062  142150 cri.go:89] found id: ""
	I1212 01:05:46.583103  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.583116  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:46.583124  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:46.583187  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:46.621483  142150 cri.go:89] found id: ""
	I1212 01:05:46.621513  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.621524  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:46.621532  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:46.621595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:46.658400  142150 cri.go:89] found id: ""
	I1212 01:05:46.658431  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.658440  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:46.658450  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:46.658520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:46.694368  142150 cri.go:89] found id: ""
	I1212 01:05:46.694393  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.694407  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:46.694413  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:46.694469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:46.733456  142150 cri.go:89] found id: ""
	I1212 01:05:46.733492  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.733504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:46.733513  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:46.733574  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:46.767206  142150 cri.go:89] found id: ""
	I1212 01:05:46.767236  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.767248  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:46.767255  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:46.767317  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:46.803520  142150 cri.go:89] found id: ""
	I1212 01:05:46.803554  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.803564  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:46.803575  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:46.803657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:46.849563  142150 cri.go:89] found id: ""
	I1212 01:05:46.849590  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.849597  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:46.849606  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:46.849618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:46.862800  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:46.862831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:46.931858  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:46.931883  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:46.931896  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:47.009125  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:47.009167  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.050830  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:47.050858  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.604639  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:49.618087  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:49.618153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:49.653674  142150 cri.go:89] found id: ""
	I1212 01:05:49.653703  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.653712  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:49.653718  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:49.653772  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:49.688391  142150 cri.go:89] found id: ""
	I1212 01:05:49.688428  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.688439  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:49.688446  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:49.688516  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:49.729378  142150 cri.go:89] found id: ""
	I1212 01:05:49.729412  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.729423  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:49.729432  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:49.729492  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:49.765171  142150 cri.go:89] found id: ""
	I1212 01:05:49.765198  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.765206  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:49.765213  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:49.765260  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:49.800980  142150 cri.go:89] found id: ""
	I1212 01:05:49.801018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.801027  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:49.801034  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:49.801086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:49.836122  142150 cri.go:89] found id: ""
	I1212 01:05:49.836149  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.836161  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:49.836169  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:49.836235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:49.873978  142150 cri.go:89] found id: ""
	I1212 01:05:49.874018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.874027  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:49.874032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:49.874086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:49.909709  142150 cri.go:89] found id: ""
	I1212 01:05:49.909741  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.909754  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:49.909766  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:49.909783  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.963352  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:49.963394  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:49.977813  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:49.977841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:50.054423  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:50.054452  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:50.054470  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:50.133375  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:50.133416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.843200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:49.844564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:47.595931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:50.095312  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.096092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:51.206349  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:53.206853  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.673427  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:52.687196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:52.687259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:52.725001  142150 cri.go:89] found id: ""
	I1212 01:05:52.725031  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.725039  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:52.725045  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:52.725110  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:52.760885  142150 cri.go:89] found id: ""
	I1212 01:05:52.760923  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.760934  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:52.760941  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:52.761025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:52.798583  142150 cri.go:89] found id: ""
	I1212 01:05:52.798615  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.798627  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:52.798635  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:52.798700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:52.835957  142150 cri.go:89] found id: ""
	I1212 01:05:52.835983  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.835991  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:52.835998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:52.836065  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:52.876249  142150 cri.go:89] found id: ""
	I1212 01:05:52.876281  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.876292  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:52.876299  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:52.876397  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:52.911667  142150 cri.go:89] found id: ""
	I1212 01:05:52.911700  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.911712  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:52.911720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:52.911796  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:52.946781  142150 cri.go:89] found id: ""
	I1212 01:05:52.946808  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.946820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:52.946827  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:52.946889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:52.985712  142150 cri.go:89] found id: ""
	I1212 01:05:52.985740  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.985752  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:52.985762  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:52.985778  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:53.038522  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:53.038563  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:53.052336  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:53.052382  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:53.132247  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:53.132280  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:53.132297  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:53.208823  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:53.208851  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:52.344518  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.344667  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.594738  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:56.595036  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:57.207827  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.747479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:55.760703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:55.760765  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:55.797684  142150 cri.go:89] found id: ""
	I1212 01:05:55.797720  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.797732  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:55.797740  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:55.797807  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:55.840900  142150 cri.go:89] found id: ""
	I1212 01:05:55.840933  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.840944  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:55.840953  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:55.841033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:55.879098  142150 cri.go:89] found id: ""
	I1212 01:05:55.879131  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.879144  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:55.879152  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:55.879217  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:55.914137  142150 cri.go:89] found id: ""
	I1212 01:05:55.914166  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.914174  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:55.914181  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:55.914238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:55.950608  142150 cri.go:89] found id: ""
	I1212 01:05:55.950635  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.950644  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:55.950654  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:55.950705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:55.992162  142150 cri.go:89] found id: ""
	I1212 01:05:55.992187  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.992196  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:55.992202  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:55.992254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:56.028071  142150 cri.go:89] found id: ""
	I1212 01:05:56.028097  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.028105  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:56.028111  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:56.028164  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:56.063789  142150 cri.go:89] found id: ""
	I1212 01:05:56.063814  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.063822  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:56.063832  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:56.063844  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:56.118057  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:56.118096  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.132908  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:56.132939  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:56.200923  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:56.200951  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:56.200971  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:56.283272  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:56.283321  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:58.825548  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:58.839298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:58.839368  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:58.874249  142150 cri.go:89] found id: ""
	I1212 01:05:58.874289  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.874301  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:58.874313  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:58.874391  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:58.909238  142150 cri.go:89] found id: ""
	I1212 01:05:58.909273  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.909286  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:58.909294  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:58.909359  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:58.945112  142150 cri.go:89] found id: ""
	I1212 01:05:58.945139  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.945146  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:58.945154  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:58.945203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:58.981101  142150 cri.go:89] found id: ""
	I1212 01:05:58.981153  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.981168  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:58.981176  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:58.981241  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:59.015095  142150 cri.go:89] found id: ""
	I1212 01:05:59.015135  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.015147  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:59.015158  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:59.015224  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:59.051606  142150 cri.go:89] found id: ""
	I1212 01:05:59.051640  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.051650  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:59.051659  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:59.051719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:59.088125  142150 cri.go:89] found id: ""
	I1212 01:05:59.088153  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.088161  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:59.088166  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:59.088223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:59.127803  142150 cri.go:89] found id: ""
	I1212 01:05:59.127829  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.127841  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:59.127853  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:59.127871  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:59.204831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:59.204857  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:59.204872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:59.285346  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:59.285387  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:59.324194  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:59.324233  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:59.378970  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:59.379022  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.845550  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.344473  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:58.595556  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:00.595723  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.706748  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.709131  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.893635  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:01.907481  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:01.907606  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:01.949985  142150 cri.go:89] found id: ""
	I1212 01:06:01.950022  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.950035  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:01.950043  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:01.950112  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:01.986884  142150 cri.go:89] found id: ""
	I1212 01:06:01.986914  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.986923  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:01.986928  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:01.986994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:02.025010  142150 cri.go:89] found id: ""
	I1212 01:06:02.025044  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.025056  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:02.025063  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:02.025137  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:02.061300  142150 cri.go:89] found id: ""
	I1212 01:06:02.061340  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.061352  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:02.061361  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:02.061427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:02.098627  142150 cri.go:89] found id: ""
	I1212 01:06:02.098667  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.098677  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:02.098684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:02.098744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:02.137005  142150 cri.go:89] found id: ""
	I1212 01:06:02.137030  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.137038  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:02.137044  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:02.137104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:02.172052  142150 cri.go:89] found id: ""
	I1212 01:06:02.172086  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.172096  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:02.172102  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:02.172154  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:02.207721  142150 cri.go:89] found id: ""
	I1212 01:06:02.207750  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.207761  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:02.207771  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:02.207787  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:02.221576  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:02.221605  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:02.291780  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:02.291812  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:02.291826  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:02.376553  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:02.376595  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:02.418407  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:02.418446  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:04.973347  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:04.988470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:04.988545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:05.024045  142150 cri.go:89] found id: ""
	I1212 01:06:05.024076  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.024085  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:05.024092  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:05.024149  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:05.060055  142150 cri.go:89] found id: ""
	I1212 01:06:05.060079  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.060089  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:05.060095  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:05.060145  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:05.097115  142150 cri.go:89] found id: ""
	I1212 01:06:05.097142  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.097152  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:05.097160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:05.097220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:05.133941  142150 cri.go:89] found id: ""
	I1212 01:06:05.133976  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.133990  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:05.133998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:05.134063  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:05.169157  142150 cri.go:89] found id: ""
	I1212 01:06:05.169185  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.169193  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:05.169200  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:05.169253  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:05.206434  142150 cri.go:89] found id: ""
	I1212 01:06:05.206464  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.206475  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:05.206484  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:05.206546  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:01.842981  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.843341  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.843811  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:02.597066  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:04.597793  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:07.095874  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:06.206955  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:08.208809  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.248363  142150 cri.go:89] found id: ""
	I1212 01:06:05.248397  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.248409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:05.248417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:05.248485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:05.284898  142150 cri.go:89] found id: ""
	I1212 01:06:05.284932  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.284945  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:05.284958  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:05.284974  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:05.362418  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:05.362445  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:05.362464  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:05.446289  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:05.446349  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:05.487075  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:05.487107  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:05.542538  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:05.542582  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.057586  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:08.070959  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:08.071019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:08.109906  142150 cri.go:89] found id: ""
	I1212 01:06:08.109936  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.109945  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:08.109951  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:08.110005  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:08.145130  142150 cri.go:89] found id: ""
	I1212 01:06:08.145159  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.145168  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:08.145175  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:08.145223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:08.183454  142150 cri.go:89] found id: ""
	I1212 01:06:08.183485  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.183496  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:08.183504  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:08.183573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:08.218728  142150 cri.go:89] found id: ""
	I1212 01:06:08.218752  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.218763  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:08.218772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:08.218835  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:08.256230  142150 cri.go:89] found id: ""
	I1212 01:06:08.256263  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.256274  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:08.256283  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:08.256345  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:08.294179  142150 cri.go:89] found id: ""
	I1212 01:06:08.294209  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.294221  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:08.294229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:08.294293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:08.335793  142150 cri.go:89] found id: ""
	I1212 01:06:08.335822  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.335835  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:08.335843  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:08.335905  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:08.387704  142150 cri.go:89] found id: ""
	I1212 01:06:08.387734  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.387746  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:08.387757  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:08.387773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:08.465260  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:08.465307  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:08.508088  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:08.508129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:08.558617  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:08.558655  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.573461  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:08.573489  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:08.649664  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:07.844408  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.343200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:09.595982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:12.094513  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.708379  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:13.207302  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:11.150614  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:11.164991  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:11.165062  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:11.201977  142150 cri.go:89] found id: ""
	I1212 01:06:11.202011  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.202045  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:11.202055  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:11.202124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:11.243638  142150 cri.go:89] found id: ""
	I1212 01:06:11.243667  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.243676  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:11.243682  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:11.243742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:11.279577  142150 cri.go:89] found id: ""
	I1212 01:06:11.279621  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.279634  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:11.279642  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:11.279709  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:11.317344  142150 cri.go:89] found id: ""
	I1212 01:06:11.317378  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.317386  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:11.317392  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:11.317457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:11.358331  142150 cri.go:89] found id: ""
	I1212 01:06:11.358361  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.358373  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:11.358381  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:11.358439  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:11.393884  142150 cri.go:89] found id: ""
	I1212 01:06:11.393911  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.393919  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:11.393926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:11.393974  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:11.433243  142150 cri.go:89] found id: ""
	I1212 01:06:11.433290  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.433302  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:11.433310  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:11.433374  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:11.478597  142150 cri.go:89] found id: ""
	I1212 01:06:11.478625  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.478637  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:11.478650  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:11.478667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:11.528096  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:11.528133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:11.542118  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:11.542149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:11.612414  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:11.612435  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:11.612451  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:11.689350  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:11.689389  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.230677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:14.245866  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:14.245970  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:14.283451  142150 cri.go:89] found id: ""
	I1212 01:06:14.283487  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.283495  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:14.283502  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:14.283552  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:14.318812  142150 cri.go:89] found id: ""
	I1212 01:06:14.318840  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.318848  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:14.318855  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:14.318904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:14.356489  142150 cri.go:89] found id: ""
	I1212 01:06:14.356519  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.356527  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:14.356533  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:14.356590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:14.394224  142150 cri.go:89] found id: ""
	I1212 01:06:14.394260  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.394271  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:14.394279  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:14.394350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:14.432440  142150 cri.go:89] found id: ""
	I1212 01:06:14.432467  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.432480  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:14.432488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:14.432540  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:14.469777  142150 cri.go:89] found id: ""
	I1212 01:06:14.469822  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.469835  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:14.469844  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:14.469904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:14.504830  142150 cri.go:89] found id: ""
	I1212 01:06:14.504860  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.504872  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:14.504881  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:14.504941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:14.539399  142150 cri.go:89] found id: ""
	I1212 01:06:14.539423  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.539432  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:14.539441  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:14.539454  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:14.552716  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:14.552749  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:14.628921  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:14.628945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:14.628959  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:14.707219  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:14.707255  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.765953  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:14.765986  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:12.343941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.843333  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.095296  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:16.596411  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:15.706990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.707150  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.324233  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:17.337428  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:17.337499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:17.374493  142150 cri.go:89] found id: ""
	I1212 01:06:17.374526  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.374538  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:17.374547  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:17.374616  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:17.408494  142150 cri.go:89] found id: ""
	I1212 01:06:17.408519  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.408527  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:17.408535  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:17.408582  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:17.452362  142150 cri.go:89] found id: ""
	I1212 01:06:17.452389  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.452397  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:17.452403  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:17.452456  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:17.493923  142150 cri.go:89] found id: ""
	I1212 01:06:17.493957  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.493968  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:17.493976  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:17.494037  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:17.529519  142150 cri.go:89] found id: ""
	I1212 01:06:17.529548  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.529556  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:17.529562  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:17.529610  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:17.570272  142150 cri.go:89] found id: ""
	I1212 01:06:17.570297  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.570305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:17.570312  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:17.570361  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:17.609326  142150 cri.go:89] found id: ""
	I1212 01:06:17.609360  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.609371  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:17.609379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:17.609470  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:17.642814  142150 cri.go:89] found id: ""
	I1212 01:06:17.642844  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.642853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:17.642863  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:17.642875  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:17.656476  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:17.656510  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:17.726997  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:17.727024  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:17.727039  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:17.803377  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:17.803424  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:17.851190  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:17.851222  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:17.344804  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.347642  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.096235  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.594712  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.707303  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.707482  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:24.208937  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:20.406953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:20.420410  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:20.420484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:20.462696  142150 cri.go:89] found id: ""
	I1212 01:06:20.462733  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.462744  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:20.462752  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:20.462815  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:20.522881  142150 cri.go:89] found id: ""
	I1212 01:06:20.522906  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.522915  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:20.522921  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:20.522979  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:20.575876  142150 cri.go:89] found id: ""
	I1212 01:06:20.575917  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.575928  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:20.575936  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:20.576003  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:20.627875  142150 cri.go:89] found id: ""
	I1212 01:06:20.627907  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.627919  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:20.627926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:20.627976  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:20.668323  142150 cri.go:89] found id: ""
	I1212 01:06:20.668353  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.668365  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:20.668372  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:20.668441  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:20.705907  142150 cri.go:89] found id: ""
	I1212 01:06:20.705942  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.705954  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:20.705963  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:20.706023  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:20.740221  142150 cri.go:89] found id: ""
	I1212 01:06:20.740249  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.740257  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:20.740263  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:20.740328  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:20.780346  142150 cri.go:89] found id: ""
	I1212 01:06:20.780372  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.780380  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:20.780390  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:20.780407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:20.837660  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:20.837699  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:20.852743  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:20.852775  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:20.928353  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:20.928385  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:20.928401  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:21.009919  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:21.009961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:23.553897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:23.568667  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:23.568742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:23.607841  142150 cri.go:89] found id: ""
	I1212 01:06:23.607873  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.607884  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:23.607891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:23.607945  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:23.645461  142150 cri.go:89] found id: ""
	I1212 01:06:23.645494  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.645505  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:23.645513  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:23.645578  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:23.681140  142150 cri.go:89] found id: ""
	I1212 01:06:23.681165  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.681174  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:23.681180  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:23.681230  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:23.718480  142150 cri.go:89] found id: ""
	I1212 01:06:23.718515  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.718526  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:23.718534  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:23.718602  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:23.760206  142150 cri.go:89] found id: ""
	I1212 01:06:23.760235  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.760243  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:23.760249  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:23.760302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:23.797384  142150 cri.go:89] found id: ""
	I1212 01:06:23.797417  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.797431  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:23.797439  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:23.797496  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:23.830608  142150 cri.go:89] found id: ""
	I1212 01:06:23.830639  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.830650  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:23.830658  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:23.830722  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:23.867481  142150 cri.go:89] found id: ""
	I1212 01:06:23.867509  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.867522  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:23.867534  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:23.867551  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:23.922529  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:23.922579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:23.936763  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:23.936794  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:24.004371  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:24.004398  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:24.004413  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:24.083097  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:24.083136  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:21.842975  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.845498  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.343574  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.596224  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.094625  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.707610  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:29.208425  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.633394  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:26.646898  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:26.646977  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:26.680382  142150 cri.go:89] found id: ""
	I1212 01:06:26.680411  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.680421  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:26.680427  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:26.680475  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:26.716948  142150 cri.go:89] found id: ""
	I1212 01:06:26.716982  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.716994  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:26.717001  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:26.717090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:26.753141  142150 cri.go:89] found id: ""
	I1212 01:06:26.753168  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.753176  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:26.753182  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:26.753231  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:26.791025  142150 cri.go:89] found id: ""
	I1212 01:06:26.791056  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.791068  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:26.791074  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:26.791130  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:26.829914  142150 cri.go:89] found id: ""
	I1212 01:06:26.829952  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.829965  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:26.829973  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:26.830046  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:26.865990  142150 cri.go:89] found id: ""
	I1212 01:06:26.866022  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.866045  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:26.866053  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:26.866133  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:26.906007  142150 cri.go:89] found id: ""
	I1212 01:06:26.906040  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.906052  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:26.906060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:26.906141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:26.946004  142150 cri.go:89] found id: ""
	I1212 01:06:26.946038  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.946048  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:26.946057  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:26.946073  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:27.018967  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:27.018996  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:27.019013  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:27.100294  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:27.100334  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:27.141147  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:27.141190  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:27.193161  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:27.193200  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:29.709616  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:29.723336  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:29.723413  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:29.769938  142150 cri.go:89] found id: ""
	I1212 01:06:29.769966  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.769977  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:29.769985  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:29.770048  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:29.809109  142150 cri.go:89] found id: ""
	I1212 01:06:29.809147  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.809160  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:29.809168  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:29.809229  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:29.845444  142150 cri.go:89] found id: ""
	I1212 01:06:29.845471  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.845481  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:29.845488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:29.845548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:29.882109  142150 cri.go:89] found id: ""
	I1212 01:06:29.882138  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.882147  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:29.882153  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:29.882203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:29.928731  142150 cri.go:89] found id: ""
	I1212 01:06:29.928764  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.928777  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:29.928785  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:29.928849  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:29.972994  142150 cri.go:89] found id: ""
	I1212 01:06:29.973026  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.973041  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:29.973048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:29.973098  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:30.009316  142150 cri.go:89] found id: ""
	I1212 01:06:30.009349  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.009357  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:30.009363  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:30.009422  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:30.043082  142150 cri.go:89] found id: ""
	I1212 01:06:30.043111  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.043122  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:30.043134  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:30.043149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:30.097831  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:30.097866  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:30.112873  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:30.112906  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:30.187035  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:30.187061  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:30.187081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:28.843986  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.343502  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:28.096043  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.594875  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.707976  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:34.208061  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.273106  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:30.273155  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:32.819179  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:32.833486  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:32.833555  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:32.872579  142150 cri.go:89] found id: ""
	I1212 01:06:32.872622  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.872631  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:32.872645  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:32.872700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:32.909925  142150 cri.go:89] found id: ""
	I1212 01:06:32.909958  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.909970  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:32.909979  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:32.910053  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:32.949085  142150 cri.go:89] found id: ""
	I1212 01:06:32.949116  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.949127  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:32.949135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:32.949197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:32.985755  142150 cri.go:89] found id: ""
	I1212 01:06:32.985782  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.985790  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:32.985796  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:32.985845  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:33.028340  142150 cri.go:89] found id: ""
	I1212 01:06:33.028367  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.028374  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:33.028380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:33.028432  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:33.064254  142150 cri.go:89] found id: ""
	I1212 01:06:33.064283  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.064292  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:33.064298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:33.064349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:33.099905  142150 cri.go:89] found id: ""
	I1212 01:06:33.099936  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.099943  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:33.099949  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:33.100008  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:33.137958  142150 cri.go:89] found id: ""
	I1212 01:06:33.137993  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.138004  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:33.138016  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:33.138034  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:33.190737  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:33.190776  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:33.205466  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:33.205502  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:33.278815  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:33.278844  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:33.278863  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:33.357387  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:33.357429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:33.843106  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.344148  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:33.095175  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.095369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:37.095797  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.707296  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.207875  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.898317  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:35.913832  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:35.913907  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:35.950320  142150 cri.go:89] found id: ""
	I1212 01:06:35.950345  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.950353  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:35.950359  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:35.950407  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:35.989367  142150 cri.go:89] found id: ""
	I1212 01:06:35.989394  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.989403  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:35.989409  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:35.989457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:36.024118  142150 cri.go:89] found id: ""
	I1212 01:06:36.024148  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.024155  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:36.024163  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:36.024221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:36.059937  142150 cri.go:89] found id: ""
	I1212 01:06:36.059966  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.059974  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:36.059980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:36.060030  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:36.096897  142150 cri.go:89] found id: ""
	I1212 01:06:36.096921  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.096933  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:36.096941  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:36.096994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:36.134387  142150 cri.go:89] found id: ""
	I1212 01:06:36.134412  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.134420  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:36.134426  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:36.134490  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:36.177414  142150 cri.go:89] found id: ""
	I1212 01:06:36.177452  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.177464  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:36.177471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:36.177533  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:36.221519  142150 cri.go:89] found id: ""
	I1212 01:06:36.221553  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.221563  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:36.221575  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:36.221590  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:36.234862  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:36.234891  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:36.314361  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:36.314391  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:36.314407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:36.398283  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:36.398328  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:36.441441  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:36.441481  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:38.995369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:39.009149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:39.009221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:39.044164  142150 cri.go:89] found id: ""
	I1212 01:06:39.044194  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.044204  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:39.044210  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:39.044259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:39.080145  142150 cri.go:89] found id: ""
	I1212 01:06:39.080180  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.080191  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:39.080197  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:39.080254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:39.119128  142150 cri.go:89] found id: ""
	I1212 01:06:39.119156  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.119167  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:39.119174  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:39.119240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:39.157444  142150 cri.go:89] found id: ""
	I1212 01:06:39.157476  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.157487  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:39.157495  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:39.157562  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:39.191461  142150 cri.go:89] found id: ""
	I1212 01:06:39.191486  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.191497  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:39.191505  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:39.191573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:39.227742  142150 cri.go:89] found id: ""
	I1212 01:06:39.227769  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.227777  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:39.227783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:39.227832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:39.268207  142150 cri.go:89] found id: ""
	I1212 01:06:39.268239  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.268251  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:39.268259  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:39.268319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:39.304054  142150 cri.go:89] found id: ""
	I1212 01:06:39.304092  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.304103  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:39.304115  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:39.304128  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:39.381937  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:39.381979  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:39.421824  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:39.421864  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:39.475968  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:39.476020  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:39.491398  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:39.491429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:39.568463  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:38.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.343589  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.594883  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.594919  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.707035  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.707860  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:42.068594  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:42.082041  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:42.082123  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:42.121535  142150 cri.go:89] found id: ""
	I1212 01:06:42.121562  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.121570  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:42.121577  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:42.121627  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:42.156309  142150 cri.go:89] found id: ""
	I1212 01:06:42.156341  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.156350  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:42.156364  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:42.156427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:42.190111  142150 cri.go:89] found id: ""
	I1212 01:06:42.190137  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.190145  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:42.190151  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:42.190209  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:42.225424  142150 cri.go:89] found id: ""
	I1212 01:06:42.225452  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.225461  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:42.225468  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:42.225526  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:42.260519  142150 cri.go:89] found id: ""
	I1212 01:06:42.260552  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.260564  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:42.260576  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:42.260644  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:42.296987  142150 cri.go:89] found id: ""
	I1212 01:06:42.297017  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.297028  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:42.297036  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:42.297109  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:42.331368  142150 cri.go:89] found id: ""
	I1212 01:06:42.331400  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.331409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:42.331415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:42.331482  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:42.367010  142150 cri.go:89] found id: ""
	I1212 01:06:42.367051  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.367062  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:42.367075  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:42.367093  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:42.381264  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:42.381299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:42.452831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:42.452856  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:42.452877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:42.531965  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:42.532006  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:42.571718  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:42.571757  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.128570  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:45.142897  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:45.142969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:45.186371  142150 cri.go:89] found id: ""
	I1212 01:06:45.186404  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.186412  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:45.186418  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:45.186468  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:45.224085  142150 cri.go:89] found id: ""
	I1212 01:06:45.224115  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.224123  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:45.224129  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:45.224195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:43.346470  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.845269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.595640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.596624  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.708204  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:48.206947  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.258477  142150 cri.go:89] found id: ""
	I1212 01:06:45.258510  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.258522  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:45.258530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:45.258590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:45.293091  142150 cri.go:89] found id: ""
	I1212 01:06:45.293125  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.293137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:45.293145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:45.293211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:45.331275  142150 cri.go:89] found id: ""
	I1212 01:06:45.331314  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.331325  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:45.331332  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:45.331400  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:45.374915  142150 cri.go:89] found id: ""
	I1212 01:06:45.374943  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.374956  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:45.374965  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:45.375027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:45.415450  142150 cri.go:89] found id: ""
	I1212 01:06:45.415480  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.415489  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:45.415496  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:45.415548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:45.454407  142150 cri.go:89] found id: ""
	I1212 01:06:45.454431  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.454439  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:45.454449  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:45.454460  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.508573  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:45.508612  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:45.524049  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:45.524085  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:45.593577  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:45.593602  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:45.593618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:45.678581  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:45.678620  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.221523  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:48.235146  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:48.235212  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:48.271845  142150 cri.go:89] found id: ""
	I1212 01:06:48.271875  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.271885  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:48.271891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:48.271944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:48.308558  142150 cri.go:89] found id: ""
	I1212 01:06:48.308589  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.308602  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:48.308610  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:48.308673  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:48.346395  142150 cri.go:89] found id: ""
	I1212 01:06:48.346423  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.346434  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:48.346440  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:48.346501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:48.381505  142150 cri.go:89] found id: ""
	I1212 01:06:48.381536  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.381548  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:48.381555  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:48.381617  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:48.417829  142150 cri.go:89] found id: ""
	I1212 01:06:48.417859  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.417871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:48.417878  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:48.417944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:48.453476  142150 cri.go:89] found id: ""
	I1212 01:06:48.453508  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.453519  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:48.453528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:48.453592  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:48.490500  142150 cri.go:89] found id: ""
	I1212 01:06:48.490531  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.490541  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:48.490547  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:48.490597  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:48.527492  142150 cri.go:89] found id: ""
	I1212 01:06:48.527520  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.527529  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:48.527539  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:48.527550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.570458  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:48.570499  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:48.623986  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:48.624031  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:48.638363  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:48.638392  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:48.709373  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:48.709400  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:48.709416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:48.344831  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.345010  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:47.596708  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.094517  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:52.094931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.706903  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:53.207824  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:51.291629  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:51.305060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:51.305140  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:51.340368  142150 cri.go:89] found id: ""
	I1212 01:06:51.340394  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.340404  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:51.340411  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:51.340489  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:51.381421  142150 cri.go:89] found id: ""
	I1212 01:06:51.381453  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.381466  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:51.381474  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:51.381536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:51.421482  142150 cri.go:89] found id: ""
	I1212 01:06:51.421518  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.421530  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:51.421538  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:51.421605  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:51.457190  142150 cri.go:89] found id: ""
	I1212 01:06:51.457217  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.457227  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:51.457236  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:51.457302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:51.496149  142150 cri.go:89] found id: ""
	I1212 01:06:51.496184  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.496196  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:51.496205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:51.496270  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:51.529779  142150 cri.go:89] found id: ""
	I1212 01:06:51.529809  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.529820  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:51.529826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:51.529893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:51.568066  142150 cri.go:89] found id: ""
	I1212 01:06:51.568105  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.568118  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:51.568126  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:51.568197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:51.605556  142150 cri.go:89] found id: ""
	I1212 01:06:51.605593  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.605605  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:51.605616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:51.605632  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:51.680531  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:51.680570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:51.727663  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:51.727697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:51.780013  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:51.780053  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:51.794203  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:51.794232  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:51.869407  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.369854  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:54.383539  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:54.383625  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:54.418536  142150 cri.go:89] found id: ""
	I1212 01:06:54.418574  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.418586  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:54.418594  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:54.418657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:54.454485  142150 cri.go:89] found id: ""
	I1212 01:06:54.454515  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.454523  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:54.454531  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:54.454581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:54.494254  142150 cri.go:89] found id: ""
	I1212 01:06:54.494284  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.494296  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:54.494304  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:54.494366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:54.532727  142150 cri.go:89] found id: ""
	I1212 01:06:54.532757  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.532768  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:54.532776  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:54.532862  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:54.569817  142150 cri.go:89] found id: ""
	I1212 01:06:54.569845  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.569856  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:54.569864  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:54.569927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:54.628530  142150 cri.go:89] found id: ""
	I1212 01:06:54.628564  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.628577  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:54.628585  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:54.628635  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:54.666761  142150 cri.go:89] found id: ""
	I1212 01:06:54.666792  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.666801  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:54.666808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:54.666879  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:54.703699  142150 cri.go:89] found id: ""
	I1212 01:06:54.703726  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.703737  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:54.703749  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:54.703764  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:54.754635  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:54.754672  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:54.769112  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:54.769143  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:54.845563  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.845580  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:54.845591  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:54.922651  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:54.922690  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:52.843114  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.845370  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.095381  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:56.097745  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:55.207916  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.708907  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.467454  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:57.480673  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:57.480769  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:57.517711  142150 cri.go:89] found id: ""
	I1212 01:06:57.517737  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.517745  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:57.517751  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:57.517813  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:57.552922  142150 cri.go:89] found id: ""
	I1212 01:06:57.552948  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.552956  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:57.552977  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:57.553061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:57.589801  142150 cri.go:89] found id: ""
	I1212 01:06:57.589827  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.589839  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:57.589845  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:57.589909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:57.626088  142150 cri.go:89] found id: ""
	I1212 01:06:57.626123  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.626135  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:57.626142  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:57.626211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:57.661228  142150 cri.go:89] found id: ""
	I1212 01:06:57.661261  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.661273  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:57.661281  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:57.661344  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:57.699523  142150 cri.go:89] found id: ""
	I1212 01:06:57.699551  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.699559  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:57.699565  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:57.699641  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:57.739000  142150 cri.go:89] found id: ""
	I1212 01:06:57.739032  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.739043  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:57.739051  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:57.739128  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:57.776691  142150 cri.go:89] found id: ""
	I1212 01:06:57.776723  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.776732  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:57.776743  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:57.776767  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:57.828495  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:57.828535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:57.843935  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:57.843970  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:57.916420  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:57.916446  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:57.916463  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:57.994107  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:57.994158  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:57.344917  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:59.844269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:58.595415  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:01.095794  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.208708  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:02.707173  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.540646  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:00.554032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:00.554141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:00.590815  142150 cri.go:89] found id: ""
	I1212 01:07:00.590843  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.590852  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:00.590858  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:00.590919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:00.627460  142150 cri.go:89] found id: ""
	I1212 01:07:00.627494  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.627507  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:00.627515  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:00.627586  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:00.667429  142150 cri.go:89] found id: ""
	I1212 01:07:00.667472  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.667484  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:00.667494  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:00.667558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:00.713026  142150 cri.go:89] found id: ""
	I1212 01:07:00.713053  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.713060  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:00.713067  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:00.713129  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:00.748218  142150 cri.go:89] found id: ""
	I1212 01:07:00.748251  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.748264  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:00.748272  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:00.748325  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:00.786287  142150 cri.go:89] found id: ""
	I1212 01:07:00.786314  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.786322  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:00.786331  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:00.786389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:00.822957  142150 cri.go:89] found id: ""
	I1212 01:07:00.822986  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.822999  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:00.823007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:00.823081  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:00.862310  142150 cri.go:89] found id: ""
	I1212 01:07:00.862342  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.862354  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:00.862368  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:00.862385  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:00.930308  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:00.930343  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:00.930360  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:01.013889  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:01.013934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:01.064305  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:01.064342  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:01.133631  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:01.133678  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:03.648853  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:03.663287  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:03.663349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:03.700723  142150 cri.go:89] found id: ""
	I1212 01:07:03.700754  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.700766  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:03.700774  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:03.700840  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:03.741025  142150 cri.go:89] found id: ""
	I1212 01:07:03.741054  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.741065  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:03.741073  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:03.741147  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:03.782877  142150 cri.go:89] found id: ""
	I1212 01:07:03.782914  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.782927  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:03.782935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:03.782998  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:03.819227  142150 cri.go:89] found id: ""
	I1212 01:07:03.819272  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.819285  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:03.819292  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:03.819341  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:03.856660  142150 cri.go:89] found id: ""
	I1212 01:07:03.856687  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.856695  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:03.856701  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:03.856750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:03.893368  142150 cri.go:89] found id: ""
	I1212 01:07:03.893400  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.893410  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:03.893417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:03.893469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:03.929239  142150 cri.go:89] found id: ""
	I1212 01:07:03.929267  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.929275  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:03.929282  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:03.929335  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:03.963040  142150 cri.go:89] found id: ""
	I1212 01:07:03.963077  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.963089  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:03.963113  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:03.963129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:04.040119  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:04.040147  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:04.040161  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:04.122230  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:04.122269  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:04.163266  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:04.163298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:04.218235  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:04.218271  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:02.342899  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:04.343072  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:03.596239  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.094842  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:05.206813  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:07.209422  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.732405  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:06.748171  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:06.748278  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:06.792828  142150 cri.go:89] found id: ""
	I1212 01:07:06.792853  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.792861  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:06.792868  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:06.792929  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:06.851440  142150 cri.go:89] found id: ""
	I1212 01:07:06.851472  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.851483  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:06.851490  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:06.851556  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:06.894850  142150 cri.go:89] found id: ""
	I1212 01:07:06.894879  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.894887  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:06.894893  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:06.894944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:06.931153  142150 cri.go:89] found id: ""
	I1212 01:07:06.931188  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.931199  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:06.931206  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:06.931271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:06.966835  142150 cri.go:89] found id: ""
	I1212 01:07:06.966862  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.966871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:06.966877  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:06.966939  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:07.004810  142150 cri.go:89] found id: ""
	I1212 01:07:07.004839  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.004848  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:07.004854  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:07.004912  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:07.042641  142150 cri.go:89] found id: ""
	I1212 01:07:07.042679  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.042691  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:07.042699  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:07.042764  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:07.076632  142150 cri.go:89] found id: ""
	I1212 01:07:07.076659  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.076668  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:07.076678  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:07.076692  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:07.136796  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:07.136841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:07.153797  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:07.153831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:07.231995  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:07.232025  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:07.232042  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:07.319913  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:07.319950  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:09.862898  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:09.878554  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:09.878640  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:09.914747  142150 cri.go:89] found id: ""
	I1212 01:07:09.914782  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.914795  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:09.914803  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:09.914864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:09.949960  142150 cri.go:89] found id: ""
	I1212 01:07:09.949998  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.950019  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:09.950027  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:09.950084  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:09.989328  142150 cri.go:89] found id: ""
	I1212 01:07:09.989368  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.989380  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:09.989388  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:09.989454  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:10.024352  142150 cri.go:89] found id: ""
	I1212 01:07:10.024382  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.024390  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:10.024397  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:10.024446  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:10.058429  142150 cri.go:89] found id: ""
	I1212 01:07:10.058459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.058467  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:10.058473  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:10.058524  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:10.095183  142150 cri.go:89] found id: ""
	I1212 01:07:10.095219  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.095227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:10.095232  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:10.095284  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:10.129657  142150 cri.go:89] found id: ""
	I1212 01:07:10.129684  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.129695  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:10.129703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:10.129759  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:10.164433  142150 cri.go:89] found id: ""
	I1212 01:07:10.164459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.164470  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:10.164483  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:10.164500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:10.178655  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:10.178687  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 01:07:08.842564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.843885  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:08.095189  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.096580  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:09.707537  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.205862  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.207175  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	W1212 01:07:10.252370  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:10.252403  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:10.252421  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:10.329870  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:10.329914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:10.377778  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:10.377812  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:12.929471  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:12.944591  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:12.944651  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:12.980053  142150 cri.go:89] found id: ""
	I1212 01:07:12.980079  142150 logs.go:282] 0 containers: []
	W1212 01:07:12.980088  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:12.980097  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:12.980182  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:13.021710  142150 cri.go:89] found id: ""
	I1212 01:07:13.021743  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.021752  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:13.021758  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:13.021828  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:13.060426  142150 cri.go:89] found id: ""
	I1212 01:07:13.060458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.060469  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:13.060477  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:13.060545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:13.097435  142150 cri.go:89] found id: ""
	I1212 01:07:13.097458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.097466  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:13.097471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:13.097521  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:13.134279  142150 cri.go:89] found id: ""
	I1212 01:07:13.134314  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.134327  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:13.134335  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:13.134402  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:13.169942  142150 cri.go:89] found id: ""
	I1212 01:07:13.169971  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.169984  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:13.169992  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:13.170054  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:13.207495  142150 cri.go:89] found id: ""
	I1212 01:07:13.207526  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.207537  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:13.207550  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:13.207636  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:13.245214  142150 cri.go:89] found id: ""
	I1212 01:07:13.245240  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.245248  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:13.245258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:13.245272  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:13.301041  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:13.301081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:13.316068  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:13.316104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:13.391091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:13.391120  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:13.391138  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:13.472090  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:13.472130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:12.844629  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:15.344452  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.594761  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.595360  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:17.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.707535  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.208767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.013216  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.026636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:16.026715  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:16.062126  142150 cri.go:89] found id: ""
	I1212 01:07:16.062157  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.062169  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:16.062177  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:16.062240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:16.097538  142150 cri.go:89] found id: ""
	I1212 01:07:16.097562  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.097572  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:16.097581  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:16.097637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:16.133615  142150 cri.go:89] found id: ""
	I1212 01:07:16.133649  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.133661  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:16.133670  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:16.133732  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:16.169327  142150 cri.go:89] found id: ""
	I1212 01:07:16.169392  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.169414  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:16.169431  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:16.169538  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:16.214246  142150 cri.go:89] found id: ""
	I1212 01:07:16.214270  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.214278  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:16.214284  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:16.214342  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:16.251578  142150 cri.go:89] found id: ""
	I1212 01:07:16.251629  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.251641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:16.251649  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:16.251712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:16.298772  142150 cri.go:89] found id: ""
	I1212 01:07:16.298802  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.298811  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:16.298818  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:16.298891  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:16.336901  142150 cri.go:89] found id: ""
	I1212 01:07:16.336937  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.336946  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:16.336957  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:16.336969  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:16.389335  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:16.389376  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:16.403713  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:16.403743  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:16.485945  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:16.485972  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:16.485992  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:16.572137  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:16.572185  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.120296  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.133826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:19.133902  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:19.174343  142150 cri.go:89] found id: ""
	I1212 01:07:19.174381  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.174391  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:19.174397  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:19.174449  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:19.212403  142150 cri.go:89] found id: ""
	I1212 01:07:19.212425  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.212433  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:19.212439  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:19.212488  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:19.247990  142150 cri.go:89] found id: ""
	I1212 01:07:19.248018  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.248027  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:19.248033  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:19.248088  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:19.286733  142150 cri.go:89] found id: ""
	I1212 01:07:19.286763  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.286775  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:19.286783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:19.286848  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:19.325967  142150 cri.go:89] found id: ""
	I1212 01:07:19.325995  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.326006  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:19.326013  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:19.326073  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:19.361824  142150 cri.go:89] found id: ""
	I1212 01:07:19.361862  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.361874  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:19.361882  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:19.361951  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:19.399874  142150 cri.go:89] found id: ""
	I1212 01:07:19.399903  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.399915  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:19.399924  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:19.399978  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:19.444342  142150 cri.go:89] found id: ""
	I1212 01:07:19.444368  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.444376  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:19.444386  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:19.444398  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:19.524722  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:19.524766  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.564941  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:19.564984  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:19.620881  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:19.620915  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:19.635038  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:19.635078  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:19.707819  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:17.851516  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:20.343210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.596696  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.095982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:21.706245  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:23.707282  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.208686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.222716  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:22.222774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:22.258211  142150 cri.go:89] found id: ""
	I1212 01:07:22.258237  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.258245  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:22.258251  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:22.258299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:22.294663  142150 cri.go:89] found id: ""
	I1212 01:07:22.294692  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.294701  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:22.294707  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:22.294771  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:22.331817  142150 cri.go:89] found id: ""
	I1212 01:07:22.331849  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.331861  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:22.331869  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:22.331927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:22.373138  142150 cri.go:89] found id: ""
	I1212 01:07:22.373168  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.373176  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:22.373185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:22.373238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:22.409864  142150 cri.go:89] found id: ""
	I1212 01:07:22.409903  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.409916  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:22.409927  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:22.409983  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:22.447498  142150 cri.go:89] found id: ""
	I1212 01:07:22.447531  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.447542  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:22.447549  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:22.447626  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:22.488674  142150 cri.go:89] found id: ""
	I1212 01:07:22.488715  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.488727  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:22.488735  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:22.488803  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:22.529769  142150 cri.go:89] found id: ""
	I1212 01:07:22.529797  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.529806  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:22.529817  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:22.529837  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:22.611864  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:22.611889  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:22.611904  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:22.694660  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:22.694707  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:22.736800  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:22.736838  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:22.789670  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:22.789710  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:22.344482  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.844735  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.594999  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:26.595500  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:25.707950  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.200781  141469 pod_ready.go:82] duration metric: took 4m0.000776844s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:28.200837  141469 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:28.200866  141469 pod_ready.go:39] duration metric: took 4m15.556500045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:28.200916  141469 kubeadm.go:597] duration metric: took 4m22.571399912s to restartPrimaryControlPlane
	W1212 01:07:28.201043  141469 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:28.201086  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:25.305223  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.318986  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:25.319057  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:25.356111  142150 cri.go:89] found id: ""
	I1212 01:07:25.356140  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.356150  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:25.356157  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:25.356223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:25.396120  142150 cri.go:89] found id: ""
	I1212 01:07:25.396151  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.396163  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:25.396171  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:25.396236  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:25.436647  142150 cri.go:89] found id: ""
	I1212 01:07:25.436674  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.436681  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:25.436687  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:25.436744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:25.475682  142150 cri.go:89] found id: ""
	I1212 01:07:25.475709  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.475721  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:25.475729  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:25.475791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:25.512536  142150 cri.go:89] found id: ""
	I1212 01:07:25.512564  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.512576  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:25.512584  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:25.512655  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:25.549569  142150 cri.go:89] found id: ""
	I1212 01:07:25.549600  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.549609  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:25.549616  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:25.549681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:25.585042  142150 cri.go:89] found id: ""
	I1212 01:07:25.585074  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.585089  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:25.585106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:25.585181  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:25.626257  142150 cri.go:89] found id: ""
	I1212 01:07:25.626283  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.626291  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:25.626301  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:25.626314  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:25.679732  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:25.679773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:25.693682  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:25.693711  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:25.770576  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:25.770599  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:25.770613  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:25.848631  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:25.848667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.388387  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.404838  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:28.404925  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:28.447452  142150 cri.go:89] found id: ""
	I1212 01:07:28.447486  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.447498  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:28.447506  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:28.447581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:28.487285  142150 cri.go:89] found id: ""
	I1212 01:07:28.487312  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.487321  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:28.487326  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:28.487389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:28.520403  142150 cri.go:89] found id: ""
	I1212 01:07:28.520433  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.520442  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:28.520448  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:28.520514  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:28.556671  142150 cri.go:89] found id: ""
	I1212 01:07:28.556703  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.556712  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:28.556720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:28.556787  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:28.597136  142150 cri.go:89] found id: ""
	I1212 01:07:28.597165  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.597176  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:28.597185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:28.597258  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:28.632603  142150 cri.go:89] found id: ""
	I1212 01:07:28.632633  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.632641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:28.632648  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:28.632710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:28.672475  142150 cri.go:89] found id: ""
	I1212 01:07:28.672512  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.672523  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:28.672530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:28.672581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:28.715053  142150 cri.go:89] found id: ""
	I1212 01:07:28.715093  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.715104  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:28.715114  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:28.715129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.752978  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:28.753017  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:28.807437  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:28.807479  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:28.822196  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:28.822223  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:28.902592  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:28.902616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:28.902630  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:27.343233  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:29.344194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.596410  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.096062  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.486972  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.500676  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:31.500755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:31.536877  142150 cri.go:89] found id: ""
	I1212 01:07:31.536911  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.536922  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:31.536931  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:31.537000  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:31.572637  142150 cri.go:89] found id: ""
	I1212 01:07:31.572670  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.572684  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:31.572692  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:31.572761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:31.610050  142150 cri.go:89] found id: ""
	I1212 01:07:31.610084  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.610097  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:31.610106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:31.610159  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:31.645872  142150 cri.go:89] found id: ""
	I1212 01:07:31.645905  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.645918  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:31.645926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:31.645988  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:31.682374  142150 cri.go:89] found id: ""
	I1212 01:07:31.682401  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.682409  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:31.682415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:31.682464  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:31.724755  142150 cri.go:89] found id: ""
	I1212 01:07:31.724788  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.724801  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:31.724809  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:31.724877  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:31.760700  142150 cri.go:89] found id: ""
	I1212 01:07:31.760732  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.760741  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:31.760747  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:31.760823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:31.794503  142150 cri.go:89] found id: ""
	I1212 01:07:31.794538  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.794549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:31.794562  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:31.794577  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:31.837103  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:31.837139  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:31.889104  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:31.889142  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:31.905849  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:31.905883  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:31.983351  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:31.983372  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:31.983388  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:34.564505  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.577808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:34.577884  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:34.616950  142150 cri.go:89] found id: ""
	I1212 01:07:34.616979  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.616992  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:34.617001  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:34.617071  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:34.653440  142150 cri.go:89] found id: ""
	I1212 01:07:34.653470  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.653478  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:34.653485  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:34.653535  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:34.693426  142150 cri.go:89] found id: ""
	I1212 01:07:34.693457  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.693465  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:34.693471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:34.693520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:34.727113  142150 cri.go:89] found id: ""
	I1212 01:07:34.727154  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.727166  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:34.727175  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:34.727237  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:34.766942  142150 cri.go:89] found id: ""
	I1212 01:07:34.766967  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.766974  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:34.766981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:34.767032  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:34.806189  142150 cri.go:89] found id: ""
	I1212 01:07:34.806214  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.806223  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:34.806229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:34.806293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:34.839377  142150 cri.go:89] found id: ""
	I1212 01:07:34.839408  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.839420  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:34.839429  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:34.839486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:34.877512  142150 cri.go:89] found id: ""
	I1212 01:07:34.877541  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.877549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:34.877558  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:34.877570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:34.914966  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:34.914994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:34.964993  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:34.965033  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:34.979644  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:34.979677  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:35.050842  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:35.050868  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:35.050893  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:31.843547  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.843911  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:36.343719  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.595369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:35.600094  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:37.634362  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.647476  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:37.647542  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:37.681730  142150 cri.go:89] found id: ""
	I1212 01:07:37.681760  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.681768  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:37.681775  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:37.681827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:37.716818  142150 cri.go:89] found id: ""
	I1212 01:07:37.716845  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.716858  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:37.716864  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:37.716913  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:37.753005  142150 cri.go:89] found id: ""
	I1212 01:07:37.753034  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.753042  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:37.753048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:37.753104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:37.789850  142150 cri.go:89] found id: ""
	I1212 01:07:37.789888  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.789900  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:37.789909  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:37.789971  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:37.826418  142150 cri.go:89] found id: ""
	I1212 01:07:37.826455  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.826466  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:37.826475  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:37.826539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:37.862108  142150 cri.go:89] found id: ""
	I1212 01:07:37.862134  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.862143  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:37.862149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:37.862202  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:37.897622  142150 cri.go:89] found id: ""
	I1212 01:07:37.897660  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.897673  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:37.897681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:37.897743  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:37.935027  142150 cri.go:89] found id: ""
	I1212 01:07:37.935055  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.935063  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:37.935072  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:37.935088  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:37.949860  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:37.949890  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:38.019692  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:38.019721  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:38.019740  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:38.100964  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:38.100994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:38.144480  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:38.144514  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:38.844539  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.844997  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:38.096180  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.699192  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.712311  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:40.712398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:40.748454  142150 cri.go:89] found id: ""
	I1212 01:07:40.748482  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.748490  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:40.748496  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:40.748545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:40.785262  142150 cri.go:89] found id: ""
	I1212 01:07:40.785292  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.785305  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:40.785312  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:40.785376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:40.821587  142150 cri.go:89] found id: ""
	I1212 01:07:40.821624  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.821636  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:40.821644  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:40.821713  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:40.882891  142150 cri.go:89] found id: ""
	I1212 01:07:40.882918  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.882926  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:40.882935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:40.882987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:40.923372  142150 cri.go:89] found id: ""
	I1212 01:07:40.923403  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.923412  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:40.923419  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:40.923485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:40.962753  142150 cri.go:89] found id: ""
	I1212 01:07:40.962781  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.962789  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:40.962795  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:40.962851  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:40.996697  142150 cri.go:89] found id: ""
	I1212 01:07:40.996731  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.996744  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:40.996751  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:40.996812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:41.031805  142150 cri.go:89] found id: ""
	I1212 01:07:41.031842  142150 logs.go:282] 0 containers: []
	W1212 01:07:41.031855  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:41.031866  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:41.031884  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:41.108288  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:41.108310  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:41.108333  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:41.190075  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:41.190115  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:41.235886  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:41.235927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:41.288515  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:41.288554  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:43.803694  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.817859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:43.817919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:43.864193  142150 cri.go:89] found id: ""
	I1212 01:07:43.864221  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.864228  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:43.864234  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:43.864288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:43.902324  142150 cri.go:89] found id: ""
	I1212 01:07:43.902359  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.902371  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:43.902379  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:43.902443  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:43.940847  142150 cri.go:89] found id: ""
	I1212 01:07:43.940880  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.940890  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:43.940896  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:43.940947  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:43.979270  142150 cri.go:89] found id: ""
	I1212 01:07:43.979302  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.979314  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:43.979322  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:43.979398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:44.024819  142150 cri.go:89] found id: ""
	I1212 01:07:44.024851  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.024863  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:44.024872  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:44.024941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:44.062199  142150 cri.go:89] found id: ""
	I1212 01:07:44.062225  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.062234  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:44.062242  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:44.062306  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:44.097158  142150 cri.go:89] found id: ""
	I1212 01:07:44.097181  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.097188  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:44.097194  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:44.097240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:44.132067  142150 cri.go:89] found id: ""
	I1212 01:07:44.132105  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.132120  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:44.132132  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:44.132148  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:44.179552  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:44.179589  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:44.238243  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:44.238299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:44.255451  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:44.255493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:44.331758  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:44.331784  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:44.331797  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:43.343026  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.343118  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:42.595856  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.096338  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:46.916033  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.929686  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:46.929761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:46.966328  142150 cri.go:89] found id: ""
	I1212 01:07:46.966357  142150 logs.go:282] 0 containers: []
	W1212 01:07:46.966365  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:46.966371  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:46.966423  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:47.002014  142150 cri.go:89] found id: ""
	I1212 01:07:47.002059  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.002074  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:47.002082  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:47.002148  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:47.038127  142150 cri.go:89] found id: ""
	I1212 01:07:47.038158  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.038166  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:47.038172  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:47.038222  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:47.071654  142150 cri.go:89] found id: ""
	I1212 01:07:47.071684  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.071696  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:47.071704  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:47.071774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:47.105489  142150 cri.go:89] found id: ""
	I1212 01:07:47.105515  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.105524  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:47.105530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:47.105577  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.143005  142150 cri.go:89] found id: ""
	I1212 01:07:47.143042  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.143051  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:47.143058  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:47.143114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:47.176715  142150 cri.go:89] found id: ""
	I1212 01:07:47.176746  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.176756  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:47.176764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:47.176827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:47.211770  142150 cri.go:89] found id: ""
	I1212 01:07:47.211806  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.211817  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:47.211831  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:47.211850  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:47.312766  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:47.312795  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:47.312811  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:47.402444  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:47.402493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:47.441071  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:47.441109  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:47.494465  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:47.494507  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.009996  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.023764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:50.023832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:50.060392  142150 cri.go:89] found id: ""
	I1212 01:07:50.060424  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.060433  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:50.060440  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:50.060497  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:50.094874  142150 cri.go:89] found id: ""
	I1212 01:07:50.094904  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.094914  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:50.094923  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:50.094987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:50.128957  142150 cri.go:89] found id: ""
	I1212 01:07:50.128986  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.128996  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:50.129005  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:50.129067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:50.164794  142150 cri.go:89] found id: ""
	I1212 01:07:50.164819  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.164828  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:50.164835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:50.164890  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:50.201295  142150 cri.go:89] found id: ""
	I1212 01:07:50.201330  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.201342  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:50.201350  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:50.201415  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.343485  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:48.337317  141884 pod_ready.go:82] duration metric: took 4m0.000178627s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:48.337358  141884 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:48.337386  141884 pod_ready.go:39] duration metric: took 4m14.601527023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:48.337421  141884 kubeadm.go:597] duration metric: took 4m22.883520304s to restartPrimaryControlPlane
	W1212 01:07:48.337486  141884 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:48.337526  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
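
Editor's note: the interleaved pod_ready.go lines are a poll loop that checks the Ready condition of each system-critical pod and gives up after 4m0s, which is exactly the timeout that fires above before the kubeadm reset. A rough sketch of that pattern using client-go is shown below; waitPodReady, the 2s poll interval, and the placeholder pod name are assumptions for illustration, not minikube's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the named pod until its Ready condition is True or the
// timeout expires, mirroring the 4m0s "pod_ready" wait seen in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Hypothetical pod name; the real log waits on metrics-server pods.
	err = waitPodReady(context.Background(), cs, "kube-system", "metrics-server-xxxxx", 4*time.Minute)
	fmt.Println("wait result:", err)
}

When the timeout error comes back, the caller logs the "will not retry" warning seen above and proceeds to reset the control plane instead of waiting further.
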
	I1212 01:07:47.595092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:50.096774  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.514069  141469 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312952103s)
	I1212 01:07:54.514153  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:54.543613  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:54.555514  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:54.569001  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:54.569024  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:54.569082  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:54.583472  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:54.583553  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:54.598721  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:54.614369  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:54.614451  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:54.625630  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.643317  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:54.643398  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.652870  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:54.662703  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:54.662774  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
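
Editor's note: the grep / rm -f pairs above are minikube checking whether each existing /etc/kubernetes/*.conf still references the expected control-plane endpoint and deleting any that do not; here the files are simply gone after the reset, so every grep exits with status 2 and the rm is a no-op. A hypothetical local sketch of that cleanup step (removeStaleKubeconfig is an invented name, not minikube's):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfig deletes path unless it already references the
// expected control-plane endpoint, mirroring the grep/rm -f pairs in the log.
func removeStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // up to date, keep it
	}
	// Missing or stale: remove it so kubeadm init can rewrite it.
	if rmErr := os.Remove(path); rmErr != nil {
		if os.IsNotExist(rmErr) {
			return nil // nothing to clean up
		}
		return rmErr
	}
	fmt.Printf("removed stale %s\n", path)
	return nil
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeStaleKubeconfig(f, endpoint); err != nil {
			fmt.Printf("cleanup of %s failed: %v\n", f, err)
		}
	}
}

With the stale configs cleared, the next step in the log is the kubeadm init run with a long --ignore-preflight-errors list, which regenerates all four files.
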
	I1212 01:07:54.672601  141469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:54.722949  141469 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:07:54.723064  141469 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:54.845332  141469 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:54.845476  141469 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:54.845623  141469 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:54.855468  141469 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:50.236158  142150 cri.go:89] found id: ""
	I1212 01:07:50.236200  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.236212  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:50.236221  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:50.236271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:50.270232  142150 cri.go:89] found id: ""
	I1212 01:07:50.270268  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.270280  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:50.270288  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:50.270356  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:50.303222  142150 cri.go:89] found id: ""
	I1212 01:07:50.303247  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.303258  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:50.303270  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:50.303288  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.316845  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:50.316874  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:50.384455  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:50.384483  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:50.384500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:50.462863  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:50.462921  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:50.503464  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:50.503495  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:53.063953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.079946  142150 kubeadm.go:597] duration metric: took 4m3.966538012s to restartPrimaryControlPlane
	W1212 01:07:53.080031  142150 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:53.080064  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:54.857558  141469 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:54.857689  141469 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:54.857774  141469 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:54.857890  141469 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:54.857960  141469 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:54.858038  141469 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:54.858109  141469 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:54.858214  141469 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:54.858296  141469 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:54.858396  141469 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:54.858503  141469 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:54.858557  141469 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:54.858643  141469 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:55.129859  141469 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:55.274235  141469 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:07:55.401999  141469 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:56.015091  141469 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:56.123268  141469 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:56.123820  141469 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:56.126469  141469 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:52.595027  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:57.096606  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:58.255454  142150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.175361092s)
	I1212 01:07:58.255545  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:58.270555  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:58.281367  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:58.291555  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:58.291580  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:58.291652  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:58.301408  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:58.301473  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:58.314324  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:58.326559  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:58.326628  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:58.338454  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.348752  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:58.348815  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.361968  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:58.374545  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:58.374614  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:07:58.387280  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:58.474893  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:07:58.475043  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:58.647222  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:58.647400  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:58.647566  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:07:58.839198  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:56.128185  141469 out.go:235]   - Booting up control plane ...
	I1212 01:07:56.128343  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:56.128478  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:56.128577  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:56.149476  141469 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:56.156042  141469 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:56.156129  141469 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:56.292423  141469 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:07:56.292567  141469 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:07:56.794594  141469 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.027526ms
	I1212 01:07:56.794711  141469 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
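
Editor's note: the kubelet-check and api-check lines above are kubeadm polling local health endpoints, first the kubelet's healthz on 127.0.0.1:10248 and then the API server, each with a 4m0s budget. The bare-bones poll below illustrates the shape of that wait; waitHealthy, the 2s client timeout, and the 500ms retry interval are assumptions, not kubeadm's actual implementation.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes,
// roughly the shape of kubeadm's kubelet-check and api-check waits.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The kubelet healthz endpoint named in the log; 4m0s matches its budget.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kubelet is healthy")
}

In the successful run above, the kubelet answered after about half a second and the API server after roughly 5.5 seconds, well inside both budgets.
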
	I1212 01:07:58.841061  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:58.841173  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:58.841297  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:58.841411  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:58.841491  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:58.841575  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:58.841650  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:58.841771  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:58.842200  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:58.842503  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:58.842993  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:58.843207  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:58.843355  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:58.919303  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:59.206038  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:59.318620  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:59.693734  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:59.709562  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:59.710774  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:59.710846  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:59.877625  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:59.879576  142150 out.go:235]   - Booting up control plane ...
	I1212 01:07:59.879733  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:59.892655  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:59.894329  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:59.897694  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:59.898269  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:07:59.594764  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:01.595663  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:02.299386  141469 kubeadm.go:310] [api-check] The API server is healthy after 5.503154599s
	I1212 01:08:02.311549  141469 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:02.326944  141469 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:02.354402  141469 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:02.354661  141469 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-607268 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:02.368168  141469 kubeadm.go:310] [bootstrap-token] Using token: 0eo07f.wy46ulxfywwd0uy8
	I1212 01:08:02.369433  141469 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:02.369569  141469 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:02.381945  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:02.407880  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:02.419211  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:02.426470  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:02.437339  141469 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:02.708518  141469 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:03.143189  141469 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:03.704395  141469 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:03.705460  141469 kubeadm.go:310] 
	I1212 01:08:03.705557  141469 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:03.705576  141469 kubeadm.go:310] 
	I1212 01:08:03.705646  141469 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:03.705650  141469 kubeadm.go:310] 
	I1212 01:08:03.705672  141469 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:03.705724  141469 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:03.705768  141469 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:03.705800  141469 kubeadm.go:310] 
	I1212 01:08:03.705906  141469 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:03.705918  141469 kubeadm.go:310] 
	I1212 01:08:03.705976  141469 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:03.705987  141469 kubeadm.go:310] 
	I1212 01:08:03.706073  141469 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:03.706191  141469 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:03.706286  141469 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:03.706307  141469 kubeadm.go:310] 
	I1212 01:08:03.706438  141469 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:03.706549  141469 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:03.706556  141469 kubeadm.go:310] 
	I1212 01:08:03.706670  141469 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.706833  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:03.706864  141469 kubeadm.go:310] 	--control-plane 
	I1212 01:08:03.706869  141469 kubeadm.go:310] 
	I1212 01:08:03.706951  141469 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:03.706963  141469 kubeadm.go:310] 
	I1212 01:08:03.707035  141469 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.707134  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:03.708092  141469 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
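	The [WARNING Service-Kubelet] above is non-fatal for this run, but it can be cleared by enabling the kubelet unit inside the VM. A minimal, illustrative fix, assuming shell access to the node via `minikube ssh` and using the profile name shown in this log:

	    minikube -p embed-certs-607268 ssh "sudo systemctl enable kubelet.service"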
	I1212 01:08:03.708135  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:08:03.708146  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:03.709765  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:03.711315  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:03.724767  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
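	The bridge CNI config written above is only reported by size (496 bytes); its contents are not captured in the log. If needed, it can be inspected in place on the node, for example (profile name taken from this log, command illustrative):

	    minikube -p embed-certs-607268 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"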
	I1212 01:08:03.749770  141469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:03.749830  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:03.749896  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-607268 minikube.k8s.io/updated_at=2024_12_12T01_08_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=embed-certs-607268 minikube.k8s.io/primary=true
	I1212 01:08:03.973050  141469 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:03.973436  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.094838  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:06.095216  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:04.473952  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.974222  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.473799  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.974261  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.473492  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.974288  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.474064  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.974218  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:08.081567  141469 kubeadm.go:1113] duration metric: took 4.331794716s to wait for elevateKubeSystemPrivileges
	I1212 01:08:08.081603  141469 kubeadm.go:394] duration metric: took 5m2.502707851s to StartCluster
	I1212 01:08:08.081629  141469 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.081722  141469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:08.083443  141469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.083783  141469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:08.083894  141469 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:08.084015  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:08.084027  141469 addons.go:69] Setting metrics-server=true in profile "embed-certs-607268"
	I1212 01:08:08.084045  141469 addons.go:234] Setting addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:08.084014  141469 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-607268"
	I1212 01:08:08.084054  141469 addons.go:69] Setting default-storageclass=true in profile "embed-certs-607268"
	I1212 01:08:08.084083  141469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-607268"
	I1212 01:08:08.084085  141469 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-607268"
	W1212 01:08:08.084130  141469 addons.go:243] addon storage-provisioner should already be in state true
	W1212 01:08:08.084057  141469 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084618  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084658  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084671  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084684  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084617  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084756  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.085205  141469 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:08.086529  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:08.104090  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I1212 01:08:08.104115  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I1212 01:08:08.104092  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1212 01:08:08.104662  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104701  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104785  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105323  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105329  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105337  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105382  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105696  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105718  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105700  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.106132  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106163  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.106364  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.106599  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106626  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.110390  141469 addons.go:234] Setting addon default-storageclass=true in "embed-certs-607268"
	W1212 01:08:08.110415  141469 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:08.110447  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.110811  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.110844  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.124380  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I1212 01:08:08.124888  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.125447  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.125472  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.125764  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.125966  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.126885  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1212 01:08:08.127417  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.127718  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I1212 01:08:08.127911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.127990  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128002  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.128161  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.128338  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.128541  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.128612  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128626  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.129037  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.129640  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.129678  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.129905  141469 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:08.131337  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:08.131367  141469 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:08.131387  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.131816  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.133335  141469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:08.134372  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.134696  141469 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.134714  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:08.134734  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.134851  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.134868  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.135026  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.135247  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.135405  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.135549  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.137253  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137705  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.137725  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137810  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.137911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.138065  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.138162  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.146888  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I1212 01:08:08.147344  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.147919  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.147937  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.148241  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.148418  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.150018  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.150282  141469 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.150299  141469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:08.150318  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.152881  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153311  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.153327  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.153344  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153509  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.153634  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.153816  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.301991  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:08.323794  141469 node_ready.go:35] waiting up to 6m0s for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338205  141469 node_ready.go:49] node "embed-certs-607268" has status "Ready":"True"
	I1212 01:08:08.338241  141469 node_ready.go:38] duration metric: took 14.401624ms for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338255  141469 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:08.355801  141469 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:08.406624  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:08.406648  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:08.409497  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.456893  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:08.456917  141469 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:08.554996  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.558767  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.558793  141469 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:08.614574  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.702483  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702513  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.702818  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.702883  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.702894  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.702904  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702912  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.703142  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.703186  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.703163  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.714426  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.714450  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.714840  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.714857  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.821732  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266688284s)
	I1212 01:08:09.821807  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.821824  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822160  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822185  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.822211  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.822225  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822487  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.822518  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822535  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842157  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.227536232s)
	I1212 01:08:09.842222  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842237  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.842627  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.842663  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.842672  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842679  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842687  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.843002  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.843013  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.843028  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.843046  141469 addons.go:475] Verifying addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:09.844532  141469 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:08.098516  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:10.596197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:09.845721  141469 addons.go:510] duration metric: took 1.761839241s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
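	With default-storageclass, storage-provisioner and metrics-server reported as enabled above, a quick external check of the same state would look something like the following (the metrics-server deployment name matches the pod listed later in this log; commands are illustrative):

	    minikube -p embed-certs-607268 addons list
	    kubectl --context embed-certs-607268 -n kube-system get deploy metrics-server
	    kubectl --context embed-certs-607268 get storageclass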
	I1212 01:08:10.400164  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:12.862616  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:14.362448  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.362473  141469 pod_ready.go:82] duration metric: took 6.006632075s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.362486  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868198  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.868220  141469 pod_ready.go:82] duration metric: took 505.72656ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868231  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872557  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.872582  141469 pod_ready.go:82] duration metric: took 4.343797ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872599  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876837  141469 pod_ready.go:93] pod "kube-proxy-6hw4b" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.876858  141469 pod_ready.go:82] duration metric: took 4.251529ms for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876867  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881467  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.881487  141469 pod_ready.go:82] duration metric: took 4.612567ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881496  141469 pod_ready.go:39] duration metric: took 6.543228562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
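	The readiness waits above key off the standard labels on kubeadm's static control-plane pods and on kube-proxy, as listed in the log line itself. An equivalent manual check, sketched with those same labels:

	    kubectl --context embed-certs-607268 -n kube-system get pods \
	      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
	    kubectl --context embed-certs-607268 -n kube-system get pods -l k8s-app=kube-proxy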
	I1212 01:08:14.881516  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:14.881571  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:14.898899  141469 api_server.go:72] duration metric: took 6.815070313s to wait for apiserver process to appear ...
	I1212 01:08:14.898942  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:14.898963  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:08:14.904555  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:08:14.905738  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:14.905762  141469 api_server.go:131] duration metric: took 6.812513ms to wait for apiserver health ...
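	The healthz probe above can be reproduced by hand against the same endpoint; as recorded in the log, the apiserver answers 200 with a plain "ok" body (-k skips TLS verification since the cluster CA is not in the local trust store):

	    curl -k https://192.168.50.151:8443/healthz
	    # ok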
	I1212 01:08:14.905771  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:14.964381  141469 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:14.964413  141469 system_pods.go:61] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:14.964418  141469 system_pods.go:61] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:14.964422  141469 system_pods.go:61] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:14.964426  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:14.964429  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:14.964432  141469 system_pods.go:61] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:14.964435  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:14.964441  141469 system_pods.go:61] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:14.964447  141469 system_pods.go:61] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:14.964460  141469 system_pods.go:74] duration metric: took 58.68072ms to wait for pod list to return data ...
	I1212 01:08:14.964476  141469 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:15.161106  141469 default_sa.go:45] found service account: "default"
	I1212 01:08:15.161137  141469 default_sa.go:55] duration metric: took 196.651344ms for default service account to be created ...
	I1212 01:08:15.161147  141469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:15.363429  141469 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:15.363457  141469 system_pods.go:89] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:15.363462  141469 system_pods.go:89] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:15.363466  141469 system_pods.go:89] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:15.363470  141469 system_pods.go:89] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:15.363473  141469 system_pods.go:89] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:15.363477  141469 system_pods.go:89] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:15.363480  141469 system_pods.go:89] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:15.363487  141469 system_pods.go:89] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:15.363492  141469 system_pods.go:89] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:15.363501  141469 system_pods.go:126] duration metric: took 202.347796ms to wait for k8s-apps to be running ...
	I1212 01:08:15.363508  141469 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:15.363553  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:15.378498  141469 system_svc.go:56] duration metric: took 14.977368ms WaitForService to wait for kubelet
	I1212 01:08:15.378527  141469 kubeadm.go:582] duration metric: took 7.294704666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:15.378545  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:15.561384  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:15.561408  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:15.561422  141469 node_conditions.go:105] duration metric: took 182.869791ms to run NodePressure ...
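	The NodePressure step reads the node's reported capacity (2 CPUs and 17734596Ki ephemeral storage here). The same figures can be pulled directly from the node object, for example:

	    kubectl --context embed-certs-607268 get node embed-certs-607268 \
	      -o jsonpath='{.status.capacity}{"\n"}'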
	I1212 01:08:15.561435  141469 start.go:241] waiting for startup goroutines ...
	I1212 01:08:15.561442  141469 start.go:246] waiting for cluster config update ...
	I1212 01:08:15.561453  141469 start.go:255] writing updated cluster config ...
	I1212 01:08:15.561693  141469 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:15.615106  141469 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:15.617073  141469 out.go:177] * Done! kubectl is now configured to use "embed-certs-607268" cluster and "default" namespace by default
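	At this point the kubeconfig written earlier points at the new cluster, and the client/server skew noted above (kubectl 1.32.0 against a 1.31.2 control plane) is within kubectl's documented one-minor-version skew policy. A quick sanity check, illustrative only:

	    kubectl config current-context   # expected: embed-certs-607268
	    kubectl version                  # client v1.32.0, server v1.31.2 per this log
	    kubectl get nodes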
	I1212 01:08:14.771660  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.434092304s)
	I1212 01:08:14.771750  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:14.802721  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:08:14.813349  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:08:14.826608  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:08:14.826637  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:08:14.826693  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:08:14.842985  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:08:14.843060  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:08:14.855326  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:08:14.872371  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:08:14.872449  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:08:14.883793  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.894245  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:08:14.894306  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.906163  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:08:14.915821  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:08:14.915867  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
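	The grep/rm sequence above keeps a kubeconfig under /etc/kubernetes only if it already references the expected endpoint (port 8444 for this profile). A compact restatement of that logic, as a sketch rather than minikube's actual code:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done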
	I1212 01:08:14.926019  141884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:08:15.092424  141884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:13.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:15.096259  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:17.596953  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:20.095957  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:22.096970  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:23.562216  141884 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:08:23.562302  141884 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:08:23.562463  141884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:08:23.562655  141884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:08:23.562786  141884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:08:23.562870  141884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:08:23.564412  141884 out.go:235]   - Generating certificates and keys ...
	I1212 01:08:23.564519  141884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:08:23.564605  141884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:08:23.564718  141884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:08:23.564802  141884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:08:23.564879  141884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:08:23.564925  141884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:08:23.565011  141884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:08:23.565110  141884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:08:23.565230  141884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:08:23.565352  141884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:08:23.565393  141884 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:08:23.565439  141884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:08:23.565485  141884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:08:23.565537  141884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:08:23.565582  141884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:08:23.565636  141884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:08:23.565700  141884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:08:23.565786  141884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:08:23.565885  141884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:08:23.567104  141884 out.go:235]   - Booting up control plane ...
	I1212 01:08:23.567195  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:08:23.567267  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:08:23.567353  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:08:23.567472  141884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:08:23.567579  141884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:08:23.567662  141884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:08:23.567812  141884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:08:23.567953  141884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:08:23.568010  141884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001996966s
	I1212 01:08:23.568071  141884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:08:23.568125  141884 kubeadm.go:310] [api-check] The API server is healthy after 5.001946459s
	I1212 01:08:23.568266  141884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:23.568424  141884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:23.568510  141884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:23.568702  141884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-076578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:23.568789  141884 kubeadm.go:310] [bootstrap-token] Using token: 472xql.x3zqihc9l5oj308m
	I1212 01:08:23.570095  141884 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:23.570226  141884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:23.570353  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:23.570550  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:23.570719  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:23.570880  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:23.571006  141884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:23.571186  141884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:23.571245  141884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:23.571322  141884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:23.571333  141884 kubeadm.go:310] 
	I1212 01:08:23.571411  141884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:23.571421  141884 kubeadm.go:310] 
	I1212 01:08:23.571530  141884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:23.571551  141884 kubeadm.go:310] 
	I1212 01:08:23.571609  141884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:23.571711  141884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:23.571795  141884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:23.571808  141884 kubeadm.go:310] 
	I1212 01:08:23.571892  141884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:23.571907  141884 kubeadm.go:310] 
	I1212 01:08:23.571985  141884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:23.571992  141884 kubeadm.go:310] 
	I1212 01:08:23.572069  141884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:23.572184  141884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:23.572276  141884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:23.572286  141884 kubeadm.go:310] 
	I1212 01:08:23.572413  141884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:23.572516  141884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:23.572525  141884 kubeadm.go:310] 
	I1212 01:08:23.572656  141884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.572805  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:23.572847  141884 kubeadm.go:310] 	--control-plane 
	I1212 01:08:23.572856  141884 kubeadm.go:310] 
	I1212 01:08:23.572973  141884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:23.572991  141884 kubeadm.go:310] 
	I1212 01:08:23.573107  141884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.573248  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:23.573273  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:08:23.573283  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:23.574736  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:23.575866  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:23.590133  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:08:23.613644  141884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:23.613737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:23.613759  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-076578 minikube.k8s.io/updated_at=2024_12_12T01_08_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=default-k8s-diff-port-076578 minikube.k8s.io/primary=true
	I1212 01:08:23.642646  141884 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:23.831478  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.331749  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.832158  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.331630  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.831737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:26.331787  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.597126  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:27.095607  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:26.831860  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.331748  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.448891  141884 kubeadm.go:1113] duration metric: took 3.835231667s to wait for elevateKubeSystemPrivileges
	I1212 01:08:27.448930  141884 kubeadm.go:394] duration metric: took 5m2.053707834s to StartCluster
	I1212 01:08:27.448957  141884 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.449060  141884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:27.450918  141884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.451183  141884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:27.451263  141884 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:27.451385  141884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451409  141884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451417  141884 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:08:27.451413  141884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451449  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:27.451454  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451465  141884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-076578"
	I1212 01:08:27.451423  141884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451570  141884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451586  141884 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:27.451648  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451876  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451905  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451927  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.451942  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452055  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.452096  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452939  141884 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:27.454521  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:27.467512  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1212 01:08:27.467541  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I1212 01:08:27.467581  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1212 01:08:27.468032  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468069  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468039  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468580  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468592  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468604  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468609  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468620  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468635  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468968  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.469191  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.469562  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469579  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469613  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.469623  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.472898  141884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.472925  141884 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:27.472956  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.473340  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.473389  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.485014  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I1212 01:08:27.485438  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.486058  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.486077  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.486629  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.486832  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.487060  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1212 01:08:27.487779  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.488503  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.488527  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.488910  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.489132  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.489304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.489892  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1212 01:08:27.490599  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.490758  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.491213  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.491236  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.491385  141884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:27.491606  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.492230  141884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:27.492375  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.492420  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.493368  141884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.493382  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:27.493397  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.493462  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:27.493468  141884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:27.493481  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.496807  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497273  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.497304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497474  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.497647  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.497691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497771  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.497922  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.498178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.498190  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.498288  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.498467  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.498634  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.498779  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.512025  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1212 01:08:27.512490  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.513168  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.513187  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.513474  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.513664  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.514930  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.515106  141884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.515119  141884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:27.515131  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.520051  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520084  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.520183  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520419  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.520574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.520737  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.520828  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.692448  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:27.712214  141884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724269  141884 node_ready.go:49] node "default-k8s-diff-port-076578" has status "Ready":"True"
	I1212 01:08:27.724301  141884 node_ready.go:38] duration metric: took 12.044784ms for node "default-k8s-diff-port-076578" to be "Ready" ...
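
The node_ready lines above confirm the node reports a Ready condition before the per-pod waits begin. As a minimal sketch (not minikube's own code), assuming client-go and using the kubeconfig path and node name that appear in this log, the same condition could be read like this:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path and node name are illustrative values taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-076578", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Expect "True" once the kubelet has registered and reported healthy.
			fmt.Printf("node Ready=%s\n", c.Status)
		}
	}
}
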
	I1212 01:08:27.724313  141884 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:27.729135  141884 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:27.768566  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:27.768596  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:27.782958  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.797167  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:27.797190  141884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:27.828960  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:27.828983  141884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:27.871251  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.883614  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:28.198044  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198090  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198457  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198510  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198522  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.198532  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198544  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198817  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198815  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198844  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.277379  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.277405  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.277719  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.277741  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955418  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084128053s)
	I1212 01:08:28.955472  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955561  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071904294s)
	I1212 01:08:28.955624  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955646  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955856  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.955874  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955881  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955888  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.957731  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957740  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957748  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957761  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957802  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957814  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957823  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.957836  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.958072  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.958090  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.958100  141884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-076578"
	I1212 01:08:28.959879  141884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:28.961027  141884 addons.go:510] duration metric: took 1.509771178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
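
Once the manifests are applied, "Verifying addon metrics-server=true" comes down to the metrics-server workload becoming ready (the pod names later in the log show its Deployment is called metrics-server in kube-system). A hedged client-go sketch of checking that rollout, independent of minikube's own verification code:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path taken from the log; Deployment name inferred from the pod names shown below.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("kube-system").Get(context.Background(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// While the addon is still coming up this prints 0/1, matching the Pending pod seen later in the log.
	fmt.Printf("metrics-server ready: %d/%d replicas\n", dep.Status.ReadyReplicas, dep.Status.Replicas)
}
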
	I1212 01:08:29.241061  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:29.241090  141884 pod_ready.go:82] duration metric: took 1.511925292s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:29.241106  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:31.247610  141884 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:29.095906  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:31.593942  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:33.246910  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.246933  141884 pod_ready.go:82] duration metric: took 4.005818542s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.246944  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753325  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.753350  141884 pod_ready.go:82] duration metric: took 506.39921ms for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753360  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758733  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.758759  141884 pod_ready.go:82] duration metric: took 5.391762ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758769  141884 pod_ready.go:39] duration metric: took 6.034446537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:33.758789  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:33.758854  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:33.774952  141884 api_server.go:72] duration metric: took 6.323732468s to wait for apiserver process to appear ...
	I1212 01:08:33.774976  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:33.774995  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:08:33.780463  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:08:33.781364  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:33.781387  141884 api_server.go:131] duration metric: took 6.404187ms to wait for apiserver health ...
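
The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint at the address from the log, which default RBAC exposes without credentials. A minimal standard-library sketch of the same probe; skipping certificate verification here is only because this example has no CA bundle, whereas minikube uses the cluster CA it generated:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: address and port (8444) are the ones reported for this profile in the log.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
	resp, err := client.Get("https://192.168.39.174:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers: 200 ok
}
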
	I1212 01:08:33.781396  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:33.786570  141884 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:33.786591  141884 system_pods.go:61] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.786596  141884 system_pods.go:61] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.786599  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.786603  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.786606  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.786610  141884 system_pods.go:61] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.786615  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.786623  141884 system_pods.go:61] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.786630  141884 system_pods.go:61] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.786643  141884 system_pods.go:74] duration metric: took 5.239236ms to wait for pod list to return data ...
	I1212 01:08:33.786655  141884 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:33.789776  141884 default_sa.go:45] found service account: "default"
	I1212 01:08:33.789794  141884 default_sa.go:55] duration metric: took 3.13371ms for default service account to be created ...
	I1212 01:08:33.789801  141884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:33.794118  141884 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:33.794139  141884 system_pods.go:89] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.794145  141884 system_pods.go:89] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.794149  141884 system_pods.go:89] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.794154  141884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.794157  141884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.794161  141884 system_pods.go:89] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.794165  141884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.794170  141884 system_pods.go:89] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.794177  141884 system_pods.go:89] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.794185  141884 system_pods.go:126] duration metric: took 4.378791ms to wait for k8s-apps to be running ...
	I1212 01:08:33.794194  141884 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:33.794233  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:33.809257  141884 system_svc.go:56] duration metric: took 15.051528ms WaitForService to wait for kubelet
	I1212 01:08:33.809290  141884 kubeadm.go:582] duration metric: took 6.358073584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:33.809323  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:33.813154  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:33.813174  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:33.813183  141884 node_conditions.go:105] duration metric: took 3.85493ms to run NodePressure ...
	I1212 01:08:33.813194  141884 start.go:241] waiting for startup goroutines ...
	I1212 01:08:33.813200  141884 start.go:246] waiting for cluster config update ...
	I1212 01:08:33.813210  141884 start.go:255] writing updated cluster config ...
	I1212 01:08:33.813474  141884 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:33.862511  141884 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:33.864367  141884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-076578" cluster and "default" namespace by default
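
The closing line notes a one-minor-version skew between the local kubectl (1.32.0) and the cluster (1.31.2), which is within kubectl's supported +/-1 range. A small client-go sketch, under the same kubeconfig assumption as above, of reading the server version that such a skew check compares against:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig path is the one minikube updates for this profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("server version:", v.GitVersion) // e.g. v1.31.2
}
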
	I1212 01:08:33.594621  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:34.589133  141411 pod_ready.go:82] duration metric: took 4m0.000384717s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	E1212 01:08:34.589166  141411 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:08:34.589184  141411 pod_ready.go:39] duration metric: took 4m8.190648334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:34.589214  141411 kubeadm.go:597] duration metric: took 4m15.984656847s to restartPrimaryControlPlane
	W1212 01:08:34.589299  141411 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:08:34.589327  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:08:39.900234  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:08:39.900966  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:39.901216  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:44.901739  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:44.901921  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:54.902652  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:54.902877  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
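
The repeated [kubelet-check] messages above come from kubeadm polling http://localhost:10248/healthz until the kubelet answers or its timeout expires. A rough standard-library sketch of that retry pattern; the interval and timeout here are illustrative, not kubeadm's exact values:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kubelet is healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for kubelet healthz")
			return
		case <-ticker.C:
			// Connection refused (as in the log) simply means the kubelet is not up yet; retry.
		}
	}
}
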
	I1212 01:09:00.919650  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.330292422s)
	I1212 01:09:00.919762  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:00.956649  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:09:00.976311  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:00.999339  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:00.999364  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:00.999413  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:01.013048  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:01.013112  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:01.027407  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:01.036801  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:01.036854  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:01.046865  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.056325  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:01.056390  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.066574  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:01.078080  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:01.078130  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:09:01.088810  141411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:01.249481  141411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:09.318633  141411 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:09:09.318694  141411 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:09:09.318789  141411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:09:09.318924  141411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:09:09.319074  141411 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:09:09.319185  141411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:09:09.320615  141411 out.go:235]   - Generating certificates and keys ...
	I1212 01:09:09.320710  141411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:09:09.320803  141411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:09:09.320886  141411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:09:09.320957  141411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:09:09.321061  141411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:09:09.321118  141411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:09:09.321188  141411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:09:09.321249  141411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:09:09.321334  141411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:09:09.321442  141411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:09:09.321516  141411 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:09:09.321611  141411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:09:09.321698  141411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:09:09.321775  141411 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:09:09.321849  141411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:09:09.321924  141411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:09:09.321973  141411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:09:09.322099  141411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:09:09.322204  141411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:09:09.323661  141411 out.go:235]   - Booting up control plane ...
	I1212 01:09:09.323780  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:09:09.323864  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:09:09.323950  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:09:09.324082  141411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:09:09.324181  141411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:09:09.324255  141411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:09:09.324431  141411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:09:09.324571  141411 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:09:09.324647  141411 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.39943ms
	I1212 01:09:09.324730  141411 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:09:09.324780  141411 kubeadm.go:310] [api-check] The API server is healthy after 5.001520724s
	I1212 01:09:09.324876  141411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:09:09.325036  141411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:09:09.325136  141411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:09:09.325337  141411 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-242725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:09:09.325401  141411 kubeadm.go:310] [bootstrap-token] Using token: k8uf20.0v0t2d7mhtmwxurz
	I1212 01:09:09.326715  141411 out.go:235]   - Configuring RBAC rules ...
	I1212 01:09:09.326840  141411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:09:09.326938  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:09:09.327149  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:09:09.327329  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:09:09.327498  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:09:09.327643  141411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:09:09.327787  141411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:09:09.327852  141411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:09:09.327926  141411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:09:09.327935  141411 kubeadm.go:310] 
	I1212 01:09:09.328027  141411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:09:09.328036  141411 kubeadm.go:310] 
	I1212 01:09:09.328138  141411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:09:09.328148  141411 kubeadm.go:310] 
	I1212 01:09:09.328183  141411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:09:09.328253  141411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:09:09.328302  141411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:09:09.328308  141411 kubeadm.go:310] 
	I1212 01:09:09.328396  141411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:09:09.328413  141411 kubeadm.go:310] 
	I1212 01:09:09.328478  141411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:09:09.328489  141411 kubeadm.go:310] 
	I1212 01:09:09.328554  141411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:09:09.328643  141411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:09:09.328719  141411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:09:09.328727  141411 kubeadm.go:310] 
	I1212 01:09:09.328797  141411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:09:09.328885  141411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:09:09.328894  141411 kubeadm.go:310] 
	I1212 01:09:09.328997  141411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329096  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:09:09.329120  141411 kubeadm.go:310] 	--control-plane 
	I1212 01:09:09.329126  141411 kubeadm.go:310] 
	I1212 01:09:09.329201  141411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:09:09.329209  141411 kubeadm.go:310] 
	I1212 01:09:09.329276  141411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329374  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:09:09.329386  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:09:09.329393  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:09:09.330870  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:09:09.332191  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:09:09.345593  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
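
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. Its exact contents are not shown in this log; as an assumption-labelled sketch, a minimal bridge-plus-portmap conflist of that general shape could be written like so:

package main

import "os"

// Sketch only: the real conflist minikube writes may differ in plugin names, subnet and options.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
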
	I1212 01:09:09.366177  141411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:09:09.366234  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:09.366252  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-242725 minikube.k8s.io/updated_at=2024_12_12T01_09_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=no-preload-242725 minikube.k8s.io/primary=true
	I1212 01:09:09.589709  141411 ops.go:34] apiserver oom_adj: -16
	I1212 01:09:09.589889  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.090703  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.590697  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.090698  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.590027  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.090413  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.590626  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.090322  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.590174  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.090032  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.233581  141411 kubeadm.go:1113] duration metric: took 4.867404479s to wait for elevateKubeSystemPrivileges
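
The run of "kubectl get sa default" commands above is minikube waiting for the default ServiceAccount to exist before it grants kube-system elevated privileges via the minikube-rbac ClusterRoleBinding. A hedged client-go sketch of that wait; the polling interval and the use of client-go instead of kubectl are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: same on-host kubeconfig path the kubectl invocations in the log use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 60; i++ {
		if _, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{}); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for the default service account")
}
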
	I1212 01:09:14.233636  141411 kubeadm.go:394] duration metric: took 4m55.678870659s to StartCluster
	I1212 01:09:14.233674  141411 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.233790  141411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:09:14.236087  141411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.236385  141411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:09:14.236460  141411 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:09:14.236567  141411 addons.go:69] Setting storage-provisioner=true in profile "no-preload-242725"
	I1212 01:09:14.236583  141411 addons.go:69] Setting default-storageclass=true in profile "no-preload-242725"
	I1212 01:09:14.236610  141411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-242725"
	I1212 01:09:14.236611  141411 addons.go:69] Setting metrics-server=true in profile "no-preload-242725"
	I1212 01:09:14.236631  141411 addons.go:234] Setting addon metrics-server=true in "no-preload-242725"
	W1212 01:09:14.236646  141411 addons.go:243] addon metrics-server should already be in state true
	I1212 01:09:14.236682  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.236588  141411 addons.go:234] Setting addon storage-provisioner=true in "no-preload-242725"
	I1212 01:09:14.236687  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1212 01:09:14.236712  141411 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:09:14.236838  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.237093  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237141  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237185  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237101  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237227  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237235  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237863  141411 out.go:177] * Verifying Kubernetes components...
	I1212 01:09:14.239284  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:09:14.254182  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I1212 01:09:14.254405  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I1212 01:09:14.254418  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1212 01:09:14.254742  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254857  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254874  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255388  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255415  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255439  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255803  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255814  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255807  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.256218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.256360  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256396  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.256524  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256567  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.259313  141411 addons.go:234] Setting addon default-storageclass=true in "no-preload-242725"
	W1212 01:09:14.259330  141411 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:09:14.259357  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.259575  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.259621  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.273148  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I1212 01:09:14.273601  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.273909  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
	I1212 01:09:14.274174  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274200  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274282  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.274560  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.274785  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274801  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274866  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.275126  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.275280  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.276840  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.277013  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.278945  141411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:09:14.279016  141411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:09:14.903981  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:14.904298  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:14.280219  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:09:14.280239  141411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:09:14.280268  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.280440  141411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.280450  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:09:14.280464  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.281368  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I1212 01:09:14.282054  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.282652  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.282673  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.283314  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.283947  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.283990  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.284230  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284232  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284802  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.284830  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285052  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285088  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.285106  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285247  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285458  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285483  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285619  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285624  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.285761  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285880  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.323872  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I1212 01:09:14.324336  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.324884  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.324906  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.325248  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.325437  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.326991  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.327217  141411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.327237  141411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:09:14.327258  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.330291  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.330895  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.330910  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.330926  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.331062  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.331219  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.331343  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.411182  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:09:14.454298  141411 node_ready.go:35] waiting up to 6m0s for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467328  141411 node_ready.go:49] node "no-preload-242725" has status "Ready":"True"
	I1212 01:09:14.467349  141411 node_ready.go:38] duration metric: took 13.017274ms for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467359  141411 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:14.482865  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:14.557685  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.594366  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.602730  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:09:14.602760  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:09:14.666446  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:09:14.666474  141411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:09:14.746040  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.746075  141411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:09:14.799479  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.862653  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.862688  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863687  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.863706  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.863721  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.863730  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863740  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:14.863988  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.864007  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878604  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.878630  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.878903  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.878944  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878914  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.914665  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320255607s)
	I1212 01:09:15.914726  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.914741  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915158  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.915204  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915219  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:15.915236  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.915249  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915499  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915528  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.106582  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.307047373s)
	I1212 01:09:16.106635  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.106652  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107000  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107020  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107030  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.107037  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107298  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107317  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107328  141411 addons.go:475] Verifying addon metrics-server=true in "no-preload-242725"
	I1212 01:09:16.107305  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:16.108981  141411 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:09:16.110608  141411 addons.go:510] duration metric: took 1.874161814s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:09:16.498983  141411 pod_ready.go:103] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:09:16.989762  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:16.989784  141411 pod_ready.go:82] duration metric: took 2.506893862s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:16.989795  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996560  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:17.996582  141411 pod_ready.go:82] duration metric: took 1.00678165s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996593  141411 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002275  141411 pod_ready.go:93] pod "etcd-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.002294  141411 pod_ready.go:82] duration metric: took 5.694407ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002308  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006942  141411 pod_ready.go:93] pod "kube-apiserver-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.006965  141411 pod_ready.go:82] duration metric: took 4.650802ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006978  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011581  141411 pod_ready.go:93] pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.011621  141411 pod_ready.go:82] duration metric: took 4.634646ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011634  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187112  141411 pod_ready.go:93] pod "kube-proxy-5kc2s" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.187143  141411 pod_ready.go:82] duration metric: took 175.498685ms for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187156  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.586974  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.587003  141411 pod_ready.go:82] duration metric: took 399.836187ms for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.587012  141411 pod_ready.go:39] duration metric: took 4.119642837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:18.587032  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:09:18.587091  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:18.603406  141411 api_server.go:72] duration metric: took 4.366985373s to wait for apiserver process to appear ...
	I1212 01:09:18.603446  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:09:18.603473  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:09:18.609003  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:09:18.609950  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:09:18.609968  141411 api_server.go:131] duration metric: took 6.513408ms to wait for apiserver health ...
	I1212 01:09:18.609976  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:09:18.790460  141411 system_pods.go:59] 9 kube-system pods found
	I1212 01:09:18.790494  141411 system_pods.go:61] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:18.790502  141411 system_pods.go:61] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:18.790507  141411 system_pods.go:61] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:18.790510  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:18.790515  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:18.790520  141411 system_pods.go:61] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:18.790525  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:18.790534  141411 system_pods.go:61] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:18.790540  141411 system_pods.go:61] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:18.790556  141411 system_pods.go:74] duration metric: took 180.570066ms to wait for pod list to return data ...
	I1212 01:09:18.790566  141411 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:09:18.987130  141411 default_sa.go:45] found service account: "default"
	I1212 01:09:18.987172  141411 default_sa.go:55] duration metric: took 196.594497ms for default service account to be created ...
	I1212 01:09:18.987185  141411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:09:19.189233  141411 system_pods.go:86] 9 kube-system pods found
	I1212 01:09:19.189262  141411 system_pods.go:89] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:19.189267  141411 system_pods.go:89] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:19.189271  141411 system_pods.go:89] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:19.189274  141411 system_pods.go:89] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:19.189290  141411 system_pods.go:89] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:19.189294  141411 system_pods.go:89] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:19.189300  141411 system_pods.go:89] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:19.189308  141411 system_pods.go:89] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:19.189318  141411 system_pods.go:89] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:19.189331  141411 system_pods.go:126] duration metric: took 202.137957ms to wait for k8s-apps to be running ...
	I1212 01:09:19.189341  141411 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:09:19.189391  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:19.204241  141411 system_svc.go:56] duration metric: took 14.889522ms WaitForService to wait for kubelet
	I1212 01:09:19.204272  141411 kubeadm.go:582] duration metric: took 4.967858935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:09:19.204289  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:09:19.387735  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:09:19.387760  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:09:19.387768  141411 node_conditions.go:105] duration metric: took 183.47486ms to run NodePressure ...
	I1212 01:09:19.387780  141411 start.go:241] waiting for startup goroutines ...
	I1212 01:09:19.387787  141411 start.go:246] waiting for cluster config update ...
	I1212 01:09:19.387796  141411 start.go:255] writing updated cluster config ...
	I1212 01:09:19.388041  141411 ssh_runner.go:195] Run: rm -f paused
	I1212 01:09:19.437923  141411 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:09:19.439913  141411 out.go:177] * Done! kubectl is now configured to use "no-preload-242725" cluster and "default" namespace by default
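	[editor's note] The no-preload-242725 start above finishes with metrics-server still Pending. A hedged sketch of how one could follow up against that cluster from the host, assuming the addon's usual k8s-app=metrics-server label (the label is not shown in this log); the context name is the profile used in this run:

		# Check whether the Pending metrics-server pod in no-preload-242725 becomes Ready.
		kubectl --context no-preload-242725 -n kube-system get pods -l k8s-app=metrics-server
		# If it stays Pending, the events usually show the cause (image pull, scheduling, probes).
		kubectl --context no-preload-242725 -n kube-system describe pods -l k8s-app=metrics-server
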
	I1212 01:09:54.906484  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:54.906805  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:54.906828  142150 kubeadm.go:310] 
	I1212 01:09:54.906866  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:09:54.906908  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:09:54.906915  142150 kubeadm.go:310] 
	I1212 01:09:54.906944  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:09:54.906974  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:09:54.907087  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:09:54.907106  142150 kubeadm.go:310] 
	I1212 01:09:54.907205  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:09:54.907240  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:09:54.907271  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:09:54.907277  142150 kubeadm.go:310] 
	I1212 01:09:54.907369  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:09:54.907474  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:09:54.907499  142150 kubeadm.go:310] 
	I1212 01:09:54.907659  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:09:54.907749  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:09:54.907815  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:09:54.907920  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:09:54.907937  142150 kubeadm.go:310] 
	I1212 01:09:54.909051  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:54.909171  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:09:54.909277  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:09:54.909442  142150 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 01:09:54.909493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:09:55.377787  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:55.393139  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:55.403640  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:55.403664  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:55.403707  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:55.413315  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:55.413394  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:55.422954  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:55.432010  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:55.432073  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:55.441944  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.451991  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:55.452064  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.461584  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:55.471118  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:55.471191  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:09:55.480829  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:55.713359  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:11:51.592618  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:11:51.592716  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:11:51.594538  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:11:51.594601  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:11:51.594684  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:11:51.594835  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:11:51.594954  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:11:51.595052  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:11:51.597008  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:11:51.597118  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:11:51.597173  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:11:51.597241  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:11:51.597297  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:11:51.597359  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:11:51.597427  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:11:51.597508  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:11:51.597585  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:11:51.597681  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:11:51.597766  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:11:51.597804  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:11:51.597869  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:11:51.597941  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:11:51.598021  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:11:51.598119  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:11:51.598207  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:11:51.598320  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:11:51.598427  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:11:51.598485  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:11:51.598577  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:11:51.599918  142150 out.go:235]   - Booting up control plane ...
	I1212 01:11:51.600024  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:11:51.600148  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:11:51.600229  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:11:51.600341  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:11:51.600507  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:11:51.600572  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:11:51.600672  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.600878  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.600992  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601222  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601285  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601456  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601515  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601702  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601804  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.602020  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.602033  142150 kubeadm.go:310] 
	I1212 01:11:51.602093  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:11:51.602153  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:11:51.602163  142150 kubeadm.go:310] 
	I1212 01:11:51.602211  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:11:51.602274  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:11:51.602393  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:11:51.602416  142150 kubeadm.go:310] 
	I1212 01:11:51.602561  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:11:51.602618  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:11:51.602651  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:11:51.602661  142150 kubeadm.go:310] 
	I1212 01:11:51.602794  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:11:51.602919  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:11:51.602928  142150 kubeadm.go:310] 
	I1212 01:11:51.603023  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:11:51.603110  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:11:51.603176  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:11:51.603237  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:11:51.603252  142150 kubeadm.go:310] 
	I1212 01:11:51.603327  142150 kubeadm.go:394] duration metric: took 8m2.544704165s to StartCluster
	I1212 01:11:51.603376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:51.603447  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:51.648444  142150 cri.go:89] found id: ""
	I1212 01:11:51.648488  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.648501  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:11:51.648509  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:11:51.648573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:51.687312  142150 cri.go:89] found id: ""
	I1212 01:11:51.687341  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.687354  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:11:51.687362  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:11:51.687419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:51.726451  142150 cri.go:89] found id: ""
	I1212 01:11:51.726505  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.726521  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:11:51.726529  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:51.726594  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:51.763077  142150 cri.go:89] found id: ""
	I1212 01:11:51.763112  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.763125  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:11:51.763132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:51.763194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:51.801102  142150 cri.go:89] found id: ""
	I1212 01:11:51.801139  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.801152  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:51.801160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:51.801220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:51.838249  142150 cri.go:89] found id: ""
	I1212 01:11:51.838275  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.838283  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:11:51.838290  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:51.838357  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:51.874958  142150 cri.go:89] found id: ""
	I1212 01:11:51.874989  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.874997  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:51.875007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:11:51.875106  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:11:51.911408  142150 cri.go:89] found id: ""
	I1212 01:11:51.911440  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.911451  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:11:51.911465  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:51.911483  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:51.997485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:51.997516  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:11:51.997532  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:11:52.119827  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:11:52.119869  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:52.162270  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:52.162298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:52.215766  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:52.215805  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 01:11:52.231106  142150 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:11:52.231187  142150 out.go:270] * 
	W1212 01:11:52.231316  142150 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.231351  142150 out.go:270] * 
	W1212 01:11:52.232281  142150 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:11:52.235692  142150 out.go:201] 
	W1212 01:11:52.236852  142150 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.236890  142150 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:11:52.236910  142150 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:11:52.238333  142150 out.go:201] 
	
	
	==> CRI-O <==
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.022261549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733965914022241909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d6c6fdc-a613-414e-ae2e-cb43f22d4257 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.022718612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7836272-6a74-44b2-bf22-18d99aeb6546 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.022786777Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7836272-6a74-44b2-bf22-18d99aeb6546 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.022819141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e7836272-6a74-44b2-bf22-18d99aeb6546 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.057896248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80c01198-3de5-438b-8cee-c832fc20e6fa name=/runtime.v1.RuntimeService/Version
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.057985451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80c01198-3de5-438b-8cee-c832fc20e6fa name=/runtime.v1.RuntimeService/Version
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.059008941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19e36616-3e86-49fe-b2d0-0680f263354b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.059373802Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733965914059354082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19e36616-3e86-49fe-b2d0-0680f263354b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.059942795Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0835722-6541-4c42-9899-04fce9dc509f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.060017178Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0835722-6541-4c42-9899-04fce9dc509f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.060050723Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d0835722-6541-4c42-9899-04fce9dc509f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.095302372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=597b1c77-636a-4283-b854-366f19846fe6 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.095409804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=597b1c77-636a-4283-b854-366f19846fe6 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.096518810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b77791b-2630-4d0f-af66-1f95b3bd9006 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.096892261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733965914096874715,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b77791b-2630-4d0f-af66-1f95b3bd9006 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.097420764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41418261-f03d-4863-9e60-ee7996c4d001 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.097556109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41418261-f03d-4863-9e60-ee7996c4d001 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.097592128Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=41418261-f03d-4863-9e60-ee7996c4d001 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.130196940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=30375dad-b194-4ac8-b26a-6820fca18e0f name=/runtime.v1.RuntimeService/Version
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.130314299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=30375dad-b194-4ac8-b26a-6820fca18e0f name=/runtime.v1.RuntimeService/Version
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.131788692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b27b70d-b52e-42e3-a221-0135f67b6cd3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.132227556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733965914132183790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b27b70d-b52e-42e3-a221-0135f67b6cd3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.133023383Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a815abf-720f-42ae-81f4-14795f64f92f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.133086875Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a815abf-720f-42ae-81f4-14795f64f92f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:11:54 old-k8s-version-738445 crio[636]: time="2024-12-12 01:11:54.133122650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6a815abf-720f-42ae-81f4-14795f64f92f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055186] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042033] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.154525] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.857593] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.677106] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.928690] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.061807] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069660] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.204368] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.145806] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.275893] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +7.875714] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.056265] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.046586] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[Dec12 01:04] kauditd_printk_skb: 46 callbacks suppressed
	[Dec12 01:07] systemd-fstab-generator[5072]: Ignoring "noauto" option for root device
	[Dec12 01:09] systemd-fstab-generator[5350]: Ignoring "noauto" option for root device
	[  +0.066882] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:11:54 up 8 min,  0 users,  load average: 0.01, 0.15, 0.09
	Linux old-k8s-version-738445 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d9180, 0xc0001020c0)
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000be17f0, 0xc000d10d00)
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: goroutine 157 [select]:
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000be7770, 0x1, 0x0, 0x0, 0x0, 0x0)
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000d40ea0, 0x0, 0x0)
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008b1c00)
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 12 01:11:51 old-k8s-version-738445 kubelet[5532]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 12 01:11:51 old-k8s-version-738445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 12 01:11:51 old-k8s-version-738445 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 12 01:11:51 old-k8s-version-738445 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 12 01:11:52 old-k8s-version-738445 kubelet[5584]: I1212 01:11:52.087027    5584 server.go:416] Version: v1.20.0
	Dec 12 01:11:52 old-k8s-version-738445 kubelet[5584]: I1212 01:11:52.087641    5584 server.go:837] Client rotation is on, will bootstrap in background
	Dec 12 01:11:52 old-k8s-version-738445 kubelet[5584]: I1212 01:11:52.089613    5584 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 12 01:11:52 old-k8s-version-738445 kubelet[5584]: I1212 01:11:52.091069    5584 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 12 01:11:52 old-k8s-version-738445 kubelet[5584]: W1212 01:11:52.091210    5584 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 2 (240.700815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-738445" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (730.52s)
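For reference, the commands below only collect the diagnostics that the kubeadm and minikube output above already suggests; they are a sketch against the old-k8s-version-738445 profile from this run, not a verified fix. The final retry with --extra-config=kubelet.cgroup-driver=systemd is minikube's own suggestion from the failure message.

	# Inspect the kubelet unit and its journal on the node (commands quoted in the kubeadm output above)
	minikube -p old-k8s-version-738445 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-738445 ssh -- sudo journalctl -xeu kubelet
	# List any control-plane containers CRI-O managed to start
	minikube -p old-k8s-version-738445 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override suggested in the failure output
	minikube start -p old-k8s-version-738445 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd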

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-607268 -n embed-certs-607268
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-12 01:17:16.200872692 +0000 UTC m=+6234.292540295
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
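The namespace and label selector the test polls for are shown above; a minimal manual check against the same cluster, assuming the kubeconfig context is named after the embed-certs-607268 profile as elsewhere in this report, would be:

	kubectl --context embed-certs-607268 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context embed-certs-607268 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard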
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-607268 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-607268 logs -n 25: (2.098537753s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-000053 -- sudo                         | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-000053                                 | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-459384                           | kubernetes-upgrade-459384    | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:54 UTC |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-242725             | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	| addons  | enable metrics-server -p embed-certs-607268            | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-535684 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | disable-driver-mounts-535684                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:56 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-076578  | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC | 12 Dec 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:59:45
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:59:45.233578  142150 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:59:45.233778  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.233807  142150 out.go:358] Setting ErrFile to fd 2...
	I1212 00:59:45.233824  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.234389  142150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:59:45.235053  142150 out.go:352] Setting JSON to false
	I1212 00:59:45.235948  142150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13327,"bootTime":1733951858,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:59:45.236050  142150 start.go:139] virtualization: kvm guest
	I1212 00:59:45.238284  142150 out.go:177] * [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:59:45.239634  142150 notify.go:220] Checking for updates...
	I1212 00:59:45.239643  142150 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:59:45.240927  142150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:59:45.242159  142150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:59:45.243348  142150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:59:45.244426  142150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:59:45.245620  142150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:59:45.247054  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:59:45.247412  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.247475  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.262410  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1212 00:59:45.262838  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.263420  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.263444  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.263773  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.263944  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.265490  142150 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1212 00:59:45.266656  142150 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:59:45.266925  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.266959  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.281207  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I1212 00:59:45.281596  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.281963  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.281991  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.282333  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.282519  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.316543  142150 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:59:45.317740  142150 start.go:297] selected driver: kvm2
	I1212 00:59:45.317754  142150 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.317960  142150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:59:45.318921  142150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.319030  142150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:59:45.334276  142150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:59:45.334744  142150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:59:45.334784  142150 cni.go:84] Creating CNI manager for ""
	I1212 00:59:45.334845  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:59:45.334901  142150 start.go:340] cluster config:
	{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.335060  142150 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.336873  142150 out.go:177] * Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	I1212 00:59:42.763823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:45.338030  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:59:45.338076  142150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:59:45.338087  142150 cache.go:56] Caching tarball of preloaded images
	I1212 00:59:45.338174  142150 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:59:45.338188  142150 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:59:45.338309  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:59:45.338520  142150 start.go:360] acquireMachinesLock for old-k8s-version-738445: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:59:48.839858  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:51.911930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:57.991816  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:01.063931  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:07.143823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:10.215896  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:16.295837  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:19.367812  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:25.447920  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:28.519965  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:34.599875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:37.671930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:43.751927  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:46.823861  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:52.903942  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:55.975967  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:02.055889  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:05.127830  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:11.207862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:14.279940  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:20.359871  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:23.431885  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:29.511831  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:32.583875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:38.663880  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:41.735869  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:47.815810  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:50.887937  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:56.967886  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:00.039916  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:06.119870  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:09.191917  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:15.271841  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:18.343881  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:24.423844  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:27.495936  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:33.575851  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:36.647862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:39.652816  141469 start.go:364] duration metric: took 4m35.142362604s to acquireMachinesLock for "embed-certs-607268"
	I1212 01:02:39.652891  141469 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:39.652902  141469 fix.go:54] fixHost starting: 
	I1212 01:02:39.653292  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:39.653345  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:39.668953  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1212 01:02:39.669389  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:39.669880  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:02:39.669906  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:39.670267  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:39.670428  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:39.670550  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:02:39.671952  141469 fix.go:112] recreateIfNeeded on embed-certs-607268: state=Stopped err=<nil>
	I1212 01:02:39.671994  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	W1212 01:02:39.672154  141469 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:39.677119  141469 out.go:177] * Restarting existing kvm2 VM for "embed-certs-607268" ...
	I1212 01:02:39.650358  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:39.650413  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650700  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:02:39.650731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650949  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:02:39.652672  141411 machine.go:96] duration metric: took 4m37.426998938s to provisionDockerMachine
	I1212 01:02:39.652723  141411 fix.go:56] duration metric: took 4m37.447585389s for fixHost
	I1212 01:02:39.652731  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 4m37.447868317s
	W1212 01:02:39.652756  141411 start.go:714] error starting host: provision: host is not running
	W1212 01:02:39.652909  141411 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1212 01:02:39.652919  141411 start.go:729] Will try again in 5 seconds ...
	I1212 01:02:39.682230  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Start
	I1212 01:02:39.682424  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring networks are active...
	I1212 01:02:39.683293  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network default is active
	I1212 01:02:39.683713  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network mk-embed-certs-607268 is active
	I1212 01:02:39.684046  141469 main.go:141] libmachine: (embed-certs-607268) Getting domain xml...
	I1212 01:02:39.684631  141469 main.go:141] libmachine: (embed-certs-607268) Creating domain...
	I1212 01:02:40.886712  141469 main.go:141] libmachine: (embed-certs-607268) Waiting to get IP...
	I1212 01:02:40.887814  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:40.888208  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:40.888304  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:40.888203  142772 retry.go:31] will retry after 273.835058ms: waiting for machine to come up
	I1212 01:02:41.164102  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.164574  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.164604  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.164545  142772 retry.go:31] will retry after 260.789248ms: waiting for machine to come up
	I1212 01:02:41.427069  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.427486  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.427513  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.427449  142772 retry.go:31] will retry after 330.511025ms: waiting for machine to come up
	I1212 01:02:41.760038  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.760388  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.760434  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.760337  142772 retry.go:31] will retry after 564.656792ms: waiting for machine to come up
	I1212 01:02:42.327037  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.327545  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.327567  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.327505  142772 retry.go:31] will retry after 473.714754ms: waiting for machine to come up
	I1212 01:02:42.803228  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.803607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.803639  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.803548  142772 retry.go:31] will retry after 872.405168ms: waiting for machine to come up
	I1212 01:02:43.677522  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:43.677891  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:43.677919  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:43.677833  142772 retry.go:31] will retry after 1.092518369s: waiting for machine to come up
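While the embed-certs-607268 VM boots, libmachine repeatedly checks the libvirt network for a DHCP lease and backs off with small, growing, jittered delays (273ms, 260ms, 330ms, 564ms, ... above). The Go sketch below shows one way to write such a poll-with-jittered-backoff loop; lookupIP, the base delay, and the cap are assumptions made for this illustration rather than the actual retry.go logic.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("unable to find current IP address")

    // lookupIP is a placeholder for querying the hypervisor's DHCP leases;
    // here it pretends the lease appears after a few polls.
    func lookupIP(poll int) (string, error) {
    	if poll < 5 {
    		return "", errNoLease
    	}
    	return "192.168.50.151", nil
    }

    func main() {
    	base := 250 * time.Millisecond
    	for poll := 0; ; poll++ {
    		ip, err := lookupIP(poll)
    		if err == nil {
    			fmt.Println("found IP for machine:", ip)
    			return
    		}
    		// Grow the delay each round and add jitter so polls do not align.
    		delay := time.Duration(float64(base) * (1 + 0.5*float64(poll)))
    		delay += time.Duration(rand.Int63n(int64(100 * time.Millisecond)))
    		if delay > 4*time.Second {
    			delay = 4 * time.Second
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    	}
    }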
	I1212 01:02:44.655390  141411 start.go:360] acquireMachinesLock for no-preload-242725: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:02:44.771319  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:44.771721  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:44.771751  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:44.771666  142772 retry.go:31] will retry after 1.147907674s: waiting for machine to come up
	I1212 01:02:45.921165  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:45.921636  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:45.921666  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:45.921589  142772 retry.go:31] will retry after 1.69009103s: waiting for machine to come up
	I1212 01:02:47.614391  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:47.614838  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:47.614863  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:47.614792  142772 retry.go:31] will retry after 1.65610672s: waiting for machine to come up
	I1212 01:02:49.272864  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:49.273312  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:49.273337  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:49.273268  142772 retry.go:31] will retry after 2.50327667s: waiting for machine to come up
	I1212 01:02:51.779671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:51.780077  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:51.780104  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:51.780016  142772 retry.go:31] will retry after 2.808303717s: waiting for machine to come up
	I1212 01:02:54.591866  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:54.592241  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:54.592285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:54.592208  142772 retry.go:31] will retry after 3.689107313s: waiting for machine to come up
	I1212 01:02:58.282552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.282980  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has current primary IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.283005  141469 main.go:141] libmachine: (embed-certs-607268) Found IP for machine: 192.168.50.151
	I1212 01:02:58.283018  141469 main.go:141] libmachine: (embed-certs-607268) Reserving static IP address...
	I1212 01:02:58.283419  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.283441  141469 main.go:141] libmachine: (embed-certs-607268) Reserved static IP address: 192.168.50.151
	I1212 01:02:58.283453  141469 main.go:141] libmachine: (embed-certs-607268) DBG | skip adding static IP to network mk-embed-certs-607268 - found existing host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"}
	I1212 01:02:58.283462  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Getting to WaitForSSH function...
	I1212 01:02:58.283469  141469 main.go:141] libmachine: (embed-certs-607268) Waiting for SSH to be available...
	I1212 01:02:58.285792  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286126  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.286162  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286298  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH client type: external
	I1212 01:02:58.286330  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa (-rw-------)
	I1212 01:02:58.286378  141469 main.go:141] libmachine: (embed-certs-607268) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:02:58.286394  141469 main.go:141] libmachine: (embed-certs-607268) DBG | About to run SSH command:
	I1212 01:02:58.286403  141469 main.go:141] libmachine: (embed-certs-607268) DBG | exit 0
	I1212 01:02:58.407633  141469 main.go:141] libmachine: (embed-certs-607268) DBG | SSH cmd err, output: <nil>: 
	I1212 01:02:58.407985  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetConfigRaw
	I1212 01:02:58.408745  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.411287  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.411642  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411920  141469 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/config.json ...
	I1212 01:02:58.412117  141469 machine.go:93] provisionDockerMachine start ...
	I1212 01:02:58.412136  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:58.412336  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.414338  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414643  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.414669  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414765  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.414944  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415114  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415259  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.415450  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.415712  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.415724  141469 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:02:58.520032  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:02:58.520068  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520312  141469 buildroot.go:166] provisioning hostname "embed-certs-607268"
	I1212 01:02:58.520341  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520539  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.523169  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.523584  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523733  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.523910  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524092  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524252  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.524411  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.524573  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.524584  141469 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-607268 && echo "embed-certs-607268" | sudo tee /etc/hostname
	I1212 01:02:58.642175  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-607268
	
	I1212 01:02:58.642232  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.645114  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645480  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.645505  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645698  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.645909  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646063  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646192  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.646334  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.646513  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.646530  141469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-607268' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-607268/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-607268' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:02:58.758655  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:58.758696  141469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:02:58.758715  141469 buildroot.go:174] setting up certificates
	I1212 01:02:58.758726  141469 provision.go:84] configureAuth start
	I1212 01:02:58.758735  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.759031  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.761749  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762024  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.762052  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762165  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.764356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.764699  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764781  141469 provision.go:143] copyHostCerts
	I1212 01:02:58.764874  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:02:58.764898  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:02:58.764986  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:02:58.765107  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:02:58.765118  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:02:58.765160  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:02:58.765235  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:02:58.765245  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:02:58.765296  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:02:58.765369  141469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-607268 san=[127.0.0.1 192.168.50.151 embed-certs-607268 localhost minikube]
	I1212 01:02:58.890412  141469 provision.go:177] copyRemoteCerts
	I1212 01:02:58.890519  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:02:58.890560  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.892973  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893262  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.893291  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893471  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.893647  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.893761  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.893855  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:58.973652  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:02:58.998097  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:02:59.022028  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:02:59.045859  141469 provision.go:87] duration metric: took 287.094036ms to configureAuth
	I1212 01:02:59.045892  141469 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:02:59.046119  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:02:59.046242  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.048869  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049255  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.049285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049465  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.049642  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049764  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049864  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.049974  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.050181  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.050198  141469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:02:59.276670  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:02:59.276708  141469 machine.go:96] duration metric: took 864.577145ms to provisionDockerMachine
	I1212 01:02:59.276724  141469 start.go:293] postStartSetup for "embed-certs-607268" (driver="kvm2")
	I1212 01:02:59.276747  141469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:02:59.276780  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.277171  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:02:59.277207  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.279974  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280341  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.280387  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280529  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.280738  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.280897  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.281026  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.363091  141469 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:02:59.367476  141469 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:02:59.367503  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:02:59.367618  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:02:59.367749  141469 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:02:59.367844  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:02:59.377895  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:02:59.402410  141469 start.go:296] duration metric: took 125.668908ms for postStartSetup
	I1212 01:02:59.402462  141469 fix.go:56] duration metric: took 19.749561015s for fixHost
	I1212 01:02:59.402485  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.405057  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.405385  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405624  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.405808  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.405974  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.406094  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.406237  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.406423  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.406436  141469 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:02:59.516697  141884 start.go:364] duration metric: took 3m42.810720852s to acquireMachinesLock for "default-k8s-diff-port-076578"
	I1212 01:02:59.516759  141884 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:59.516773  141884 fix.go:54] fixHost starting: 
	I1212 01:02:59.517192  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:59.517241  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:59.533969  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I1212 01:02:59.534367  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:59.534831  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:02:59.534854  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:59.535165  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:59.535362  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:02:59.535499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:02:59.536930  141884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-076578: state=Stopped err=<nil>
	I1212 01:02:59.536951  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	W1212 01:02:59.537093  141884 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:59.538974  141884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-076578" ...
	I1212 01:02:59.516496  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965379.489556963
	
	I1212 01:02:59.516525  141469 fix.go:216] guest clock: 1733965379.489556963
	I1212 01:02:59.516535  141469 fix.go:229] Guest: 2024-12-12 01:02:59.489556963 +0000 UTC Remote: 2024-12-12 01:02:59.40246635 +0000 UTC m=+295.033602018 (delta=87.090613ms)
	I1212 01:02:59.516574  141469 fix.go:200] guest clock delta is within tolerance: 87.090613ms
	I1212 01:02:59.516580  141469 start.go:83] releasing machines lock for "embed-certs-607268", held for 19.863720249s
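The fixHost step above reads the guest clock with `date +%s.%N` over SSH and accepts the host when the guest/host delta is inside a tolerance (here the delta is 87.090613ms). A minimal Go sketch of that comparison, using the two timestamps from the log, is below; the parsing helper and the one-second tolerance are assumptions for illustration, not values taken from minikube.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output (e.g. "1733965379.489556963")
    // into a time.Time; it assumes the full 9 fractional digits that %N prints.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	const tolerance = time.Second // assumed threshold for this sketch
    	// Both values are taken from the log lines above.
    	guest, err := parseGuestClock("1733965379.489556963")
    	if err != nil {
    		panic(err)
    	}
    	remote := time.Date(2024, time.December, 12, 1, 2, 59, 402466350, time.UTC)
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // ~87.090613ms
    	} else {
    		fmt.Printf("guest clock delta %v is outside tolerance, clock sync needed\n", delta)
    	}
    }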
	I1212 01:02:59.516605  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.516828  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:59.519731  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520075  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.520111  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520202  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520752  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520921  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.521064  141469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:02:59.521131  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.521155  141469 ssh_runner.go:195] Run: cat /version.json
	I1212 01:02:59.521171  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.523724  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.523971  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524036  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524063  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524221  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524374  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524375  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524401  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524553  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.524562  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524719  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524719  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.524837  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.525000  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.628168  141469 ssh_runner.go:195] Run: systemctl --version
	I1212 01:02:59.635800  141469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:02:59.788137  141469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:02:59.795216  141469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:02:59.795289  141469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:02:59.811889  141469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:02:59.811917  141469 start.go:495] detecting cgroup driver to use...
	I1212 01:02:59.811992  141469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:02:59.827154  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:02:59.841138  141469 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:02:59.841193  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:02:59.854874  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:02:59.869250  141469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:02:59.994723  141469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:00.136385  141469 docker.go:233] disabling docker service ...
	I1212 01:03:00.136462  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:00.150966  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:00.163907  141469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:00.340171  141469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:00.480828  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:00.498056  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:00.518273  141469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:00.518339  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.529504  141469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:00.529571  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.540616  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.553419  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.566004  141469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:00.577682  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.589329  141469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.612561  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.625526  141469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:00.635229  141469 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:00.635289  141469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:00.657569  141469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:00.669982  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:00.793307  141469 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:00.887423  141469 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:00.887498  141469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:00.892715  141469 start.go:563] Will wait 60s for crictl version
	I1212 01:03:00.892773  141469 ssh_runner.go:195] Run: which crictl
	I1212 01:03:00.896646  141469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:00.933507  141469 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:00.933606  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:00.977011  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:01.008491  141469 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
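After CRI-O is restarted, the log waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer a version query. The Go sketch below shows a generic poll-until-deadline helper in that spirit; waitFor, the 500ms poll interval, and the stat-based check are illustrative assumptions, not minikube's start.go implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitFor polls check every interval until it succeeds or the deadline passes.
    func waitFor(timeout, interval time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %v: %w", timeout, err)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	const socketPath = "/var/run/crio/crio.sock" // path from the log above
    	// On a machine without CRI-O this simply times out after 60s.
    	err := waitFor(60*time.Second, 500*time.Millisecond, func() error {
    		_, statErr := os.Stat(socketPath)
    		return statErr
    	})
    	if err != nil {
    		fmt.Println("socket never appeared:", err)
    		return
    	}
    	fmt.Println("socket is ready:", socketPath)
    }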
	I1212 01:02:59.540301  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Start
	I1212 01:02:59.540482  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring networks are active...
	I1212 01:02:59.541181  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network default is active
	I1212 01:02:59.541503  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network mk-default-k8s-diff-port-076578 is active
	I1212 01:02:59.541802  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Getting domain xml...
	I1212 01:02:59.542437  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Creating domain...
	I1212 01:03:00.796803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting to get IP...
	I1212 01:03:00.797932  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798386  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798495  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.798404  142917 retry.go:31] will retry after 199.022306ms: waiting for machine to come up
	I1212 01:03:00.999067  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999547  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999572  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.999499  142917 retry.go:31] will retry after 340.093067ms: waiting for machine to come up
	I1212 01:03:01.340839  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341513  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.341437  142917 retry.go:31] will retry after 469.781704ms: waiting for machine to come up
	I1212 01:03:01.009956  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:03:01.012767  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013224  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:03:01.013252  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013471  141469 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:01.017815  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:01.032520  141469 kubeadm.go:883] updating cluster {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:01.032662  141469 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:01.032715  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:01.070406  141469 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:01.070478  141469 ssh_runner.go:195] Run: which lz4
	I1212 01:03:01.074840  141469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:01.079207  141469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:01.079238  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:02.524822  141469 crio.go:462] duration metric: took 1.450020274s to copy over tarball
	I1212 01:03:02.524909  141469 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:01.812803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813298  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813335  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.813232  142917 retry.go:31] will retry after 552.327376ms: waiting for machine to come up
	I1212 01:03:02.367682  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368152  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368187  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:02.368106  142917 retry.go:31] will retry after 629.731283ms: waiting for machine to come up
	I1212 01:03:02.999887  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000307  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000339  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.000235  142917 retry.go:31] will retry after 764.700679ms: waiting for machine to come up
	I1212 01:03:03.766389  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766891  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766919  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.766845  142917 retry.go:31] will retry after 920.806371ms: waiting for machine to come up
	I1212 01:03:04.689480  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690029  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:04.689996  142917 retry.go:31] will retry after 1.194297967s: waiting for machine to come up
	I1212 01:03:05.886345  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886729  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886796  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:05.886714  142917 retry.go:31] will retry after 1.60985804s: waiting for machine to come up
	I1212 01:03:04.719665  141469 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194717299s)
	I1212 01:03:04.719708  141469 crio.go:469] duration metric: took 2.194851225s to extract the tarball
	I1212 01:03:04.719719  141469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:04.756600  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:04.802801  141469 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:04.802832  141469 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:04.802840  141469 kubeadm.go:934] updating node { 192.168.50.151 8443 v1.31.2 crio true true} ...
	I1212 01:03:04.802949  141469 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-607268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:04.803008  141469 ssh_runner.go:195] Run: crio config
	I1212 01:03:04.854778  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:04.854804  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:04.854815  141469 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:04.854836  141469 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.151 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-607268 NodeName:embed-certs-607268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:04.854962  141469 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-607268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:04.855023  141469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:04.864877  141469 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:04.864967  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:04.874503  141469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1212 01:03:04.891124  141469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:04.907560  141469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1212 01:03:04.924434  141469 ssh_runner.go:195] Run: grep 192.168.50.151	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:04.928518  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
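
The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts, appends the current mapping, and copies the result back via a temp file. The same edit expressed in Go (a sketch assuming simple "ip<TAB>hostname" lines; minikube itself shells out as shown above):

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps ip to host, dropping any previous line for that host.
func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[1] == host {
			continue // drop the stale entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the logged version uses `sudo cp` instead
}

func main() {
	// Work on a throwaway copy rather than the real /etc/hosts.
	sample := "/tmp/hosts.example"
	_ = os.WriteFile(sample, []byte("127.0.0.1\tlocalhost\n192.168.50.200\tcontrol-plane.minikube.internal\n"), 0644)
	if err := setHostsEntry(sample, "192.168.50.151", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
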
	I1212 01:03:04.940523  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:05.076750  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:05.094388  141469 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268 for IP: 192.168.50.151
	I1212 01:03:05.094424  141469 certs.go:194] generating shared ca certs ...
	I1212 01:03:05.094440  141469 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:05.094618  141469 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:05.094691  141469 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:05.094710  141469 certs.go:256] generating profile certs ...
	I1212 01:03:05.094833  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/client.key
	I1212 01:03:05.094916  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key.9253237b
	I1212 01:03:05.094968  141469 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key
	I1212 01:03:05.095131  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:05.095177  141469 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:05.095192  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:05.095224  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:05.095254  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:05.095293  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:05.095359  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:05.096133  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:05.130605  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:05.164694  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:05.206597  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:05.241305  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 01:03:05.270288  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:05.296137  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:05.320158  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:05.343820  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:05.373277  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:05.397070  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:05.420738  141469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:05.437822  141469 ssh_runner.go:195] Run: openssl version
	I1212 01:03:05.443744  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:05.454523  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459182  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459237  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.465098  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:05.475681  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:05.486396  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490883  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490929  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.496613  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:05.507295  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:05.517980  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522534  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522590  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.528117  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:05.538979  141469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:05.543723  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:05.549611  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:05.555445  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:05.561482  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:05.567221  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:05.573015  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
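
Each `openssl x509 -checkend 86400` run above asks whether a certificate will still be valid 24 hours from now, so the restart path can regenerate anything that is about to expire. The equivalent check written directly against a PEM file in Go (a sketch; the path used in main is only an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at path
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
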
	I1212 01:03:05.578902  141469 kubeadm.go:392] StartCluster: {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:05.578984  141469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:05.579042  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.619027  141469 cri.go:89] found id: ""
	I1212 01:03:05.619115  141469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:05.629472  141469 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:05.629501  141469 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:05.629567  141469 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:05.639516  141469 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:05.640491  141469 kubeconfig.go:125] found "embed-certs-607268" server: "https://192.168.50.151:8443"
	I1212 01:03:05.642468  141469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:05.651891  141469 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.151
	I1212 01:03:05.651922  141469 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:05.651934  141469 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:05.651978  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.686414  141469 cri.go:89] found id: ""
	I1212 01:03:05.686501  141469 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:05.702724  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:05.712454  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:05.712480  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:05.712531  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:05.721529  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:05.721603  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:05.730897  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:05.739758  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:05.739815  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:05.749089  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.758042  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:05.758104  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.767425  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:05.776195  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:05.776260  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:05.785435  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:05.794795  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:05.918710  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:06.846975  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.072898  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.139677  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.237216  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:07.237336  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:07.738145  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.238219  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.255671  141469 api_server.go:72] duration metric: took 1.018455783s to wait for apiserver process to appear ...
	I1212 01:03:08.255705  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:08.255734  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:08.256408  141469 api_server.go:269] stopped: https://192.168.50.151:8443/healthz: Get "https://192.168.50.151:8443/healthz": dial tcp 192.168.50.151:8443: connect: connection refused
	I1212 01:03:08.756070  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:07.498527  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498942  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498966  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:07.498889  142917 retry.go:31] will retry after 2.278929136s: waiting for machine to come up
	I1212 01:03:09.779321  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779847  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779879  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:09.779793  142917 retry.go:31] will retry after 1.82028305s: waiting for machine to come up
	I1212 01:03:10.630080  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.630121  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.630140  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.674408  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.674470  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.756660  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.763043  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:10.763088  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.256254  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.263457  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.263481  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.756759  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.764019  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.764053  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:12.256627  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:12.262369  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:03:12.270119  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:12.270153  141469 api_server.go:131] duration metric: took 4.014438706s to wait for apiserver health ...
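
The wait above polls https://192.168.50.151:8443/healthz roughly every 500ms, treating the 403 and 500 responses as "not ready yet" until a 200 arrives. A minimal Go sketch of that polling pattern (TLS verification is skipped here purely for brevity; the real client authenticates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration only: skip TLS verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.151:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
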
	I1212 01:03:12.270164  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:12.270172  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:12.272148  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:12.273667  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:12.289376  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:12.312620  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:12.323663  141469 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:12.323715  141469 system_pods.go:61] "coredns-7c65d6cfc9-n66x6" [ae2c1ac7-0c17-453d-a05c-70fbf6d81e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:12.323727  141469 system_pods.go:61] "etcd-embed-certs-607268" [811dc3d0-d893-45a0-a5c7-3fee0efd2e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:12.323747  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [11848f2c-215b-4cf4-88f0-93151c55e7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:12.323764  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [4f4066ab-b6e4-4a46-a03b-dda1776c39ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:12.323776  141469 system_pods.go:61] "kube-proxy-9f6lj" [2463030a-d7db-4031-9e26-0a56a9067520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:12.323784  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [c2aeaf4a-7fb8-4bb8-87ea-5401db017fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:12.323795  141469 system_pods.go:61] "metrics-server-6867b74b74-5bms9" [e1a794f9-cf60-486f-a0e8-670dc7dfb4da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:12.323803  141469 system_pods.go:61] "storage-provisioner" [b29860cd-465d-4e70-ad5d-dd17c22ae290] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:12.323820  141469 system_pods.go:74] duration metric: took 11.170811ms to wait for pod list to return data ...
	I1212 01:03:12.323845  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:12.327828  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:12.327863  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:12.327880  141469 node_conditions.go:105] duration metric: took 4.029256ms to run NodePressure ...
	I1212 01:03:12.327902  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:12.638709  141469 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644309  141469 kubeadm.go:739] kubelet initialised
	I1212 01:03:12.644332  141469 kubeadm.go:740] duration metric: took 5.590168ms waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644356  141469 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:12.650768  141469 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:11.601456  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602012  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602044  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:11.601956  142917 retry.go:31] will retry after 2.272258384s: waiting for machine to come up
	I1212 01:03:13.876607  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.876986  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.877024  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:13.876950  142917 retry.go:31] will retry after 4.014936005s: waiting for machine to come up
	I1212 01:03:19.148724  142150 start.go:364] duration metric: took 3m33.810164292s to acquireMachinesLock for "old-k8s-version-738445"
	I1212 01:03:19.148804  142150 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:19.148816  142150 fix.go:54] fixHost starting: 
	I1212 01:03:19.149247  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:19.149331  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:19.167749  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 01:03:19.168331  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:19.168873  142150 main.go:141] libmachine: Using API Version  1
	I1212 01:03:19.168906  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:19.169286  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:19.169500  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:19.169655  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 01:03:19.171285  142150 fix.go:112] recreateIfNeeded on old-k8s-version-738445: state=Stopped err=<nil>
	I1212 01:03:19.171323  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	W1212 01:03:19.171470  142150 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:19.174413  142150 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-738445" ...
	I1212 01:03:14.657097  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:16.658207  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:17.657933  141469 pod_ready.go:93] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:17.657957  141469 pod_ready.go:82] duration metric: took 5.007165494s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:17.657966  141469 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:19.175763  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .Start
	I1212 01:03:19.175946  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring networks are active...
	I1212 01:03:19.176721  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network default is active
	I1212 01:03:19.177067  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network mk-old-k8s-version-738445 is active
	I1212 01:03:19.177512  142150 main.go:141] libmachine: (old-k8s-version-738445) Getting domain xml...
	I1212 01:03:19.178281  142150 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 01:03:17.896127  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has current primary IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896639  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Found IP for machine: 192.168.39.174
	I1212 01:03:17.896659  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserving static IP address...
	I1212 01:03:17.897028  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.897062  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserved static IP address: 192.168.39.174
	I1212 01:03:17.897087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | skip adding static IP to network mk-default-k8s-diff-port-076578 - found existing host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"}
	I1212 01:03:17.897108  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Getting to WaitForSSH function...
	I1212 01:03:17.897126  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for SSH to be available...
	I1212 01:03:17.899355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899727  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.899754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899911  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH client type: external
	I1212 01:03:17.899941  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa (-rw-------)
	I1212 01:03:17.899976  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:17.899989  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | About to run SSH command:
	I1212 01:03:17.900005  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | exit 0
	I1212 01:03:18.036261  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:18.036610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetConfigRaw
	I1212 01:03:18.037352  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.040173  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040570  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.040595  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040866  141884 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/config.json ...
	I1212 01:03:18.041107  141884 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:18.041134  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.041355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.043609  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.043945  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.043973  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.044142  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.044291  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044466  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.044745  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.044986  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.045002  141884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:18.156161  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:18.156193  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156472  141884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-076578"
	I1212 01:03:18.156499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.159391  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.159871  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.159903  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.160048  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.160244  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160379  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160500  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.160681  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.160898  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.160917  141884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-076578 && echo "default-k8s-diff-port-076578" | sudo tee /etc/hostname
	I1212 01:03:18.285904  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-076578
	
	I1212 01:03:18.285937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.288620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.288987  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.289010  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.289285  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.289491  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289658  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289799  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.289981  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.290190  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.290223  141884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-076578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-076578/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-076578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:18.409683  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:18.409721  141884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:18.409751  141884 buildroot.go:174] setting up certificates
	I1212 01:03:18.409761  141884 provision.go:84] configureAuth start
	I1212 01:03:18.409782  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.410045  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.412393  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412721  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.412756  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.415204  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415502  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.415530  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415663  141884 provision.go:143] copyHostCerts
	I1212 01:03:18.415735  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:18.415757  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:18.415832  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:18.415925  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:18.415933  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:18.415952  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:18.416007  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:18.416015  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:18.416032  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:18.416081  141884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-076578 san=[127.0.0.1 192.168.39.174 default-k8s-diff-port-076578 localhost minikube]
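
The "generating server cert" step above builds a TLS server certificate whose SANs cover 127.0.0.1, the VM IP, the profile name, localhost and minikube. The sketch below shows roughly what that amounts to with crypto/x509; it self-signs for brevity, whereas the real certificate is signed with the profile's CA key, and the names are taken from the log line above.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-076578"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log line above.
    		DNSNames:    []string{"default-k8s-diff-port-076578", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")},
    	}
    	// Self-signed here; the real flow passes the CA cert and CA key instead of tmpl/key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
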
	I1212 01:03:18.502493  141884 provision.go:177] copyRemoteCerts
	I1212 01:03:18.502562  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:18.502594  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.505104  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505377  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.505409  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505568  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.505754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.505892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.506034  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.590425  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:18.616850  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 01:03:18.640168  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:18.664517  141884 provision.go:87] duration metric: took 254.738256ms to configureAuth
	I1212 01:03:18.664542  141884 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:18.664705  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:03:18.664778  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.667425  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.667784  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.667808  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.668004  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.668178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668313  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668448  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.668587  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.668751  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.668772  141884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:18.906880  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:18.906908  141884 machine.go:96] duration metric: took 865.784426ms to provisionDockerMachine
	I1212 01:03:18.906920  141884 start.go:293] postStartSetup for "default-k8s-diff-port-076578" (driver="kvm2")
	I1212 01:03:18.906931  141884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:18.906949  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.907315  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:18.907348  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.909882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910213  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.910242  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910347  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.910542  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.910680  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.910806  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.994819  141884 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:18.998959  141884 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:18.998989  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:18.999069  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:18.999163  141884 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:18.999252  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:19.009226  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:19.032912  141884 start.go:296] duration metric: took 125.973128ms for postStartSetup
	I1212 01:03:19.032960  141884 fix.go:56] duration metric: took 19.516187722s for fixHost
	I1212 01:03:19.032990  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.035623  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.035947  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.035977  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.036151  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.036310  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036438  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036581  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.036738  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:19.036906  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:19.036919  141884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:19.148565  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965399.101726035
	
	I1212 01:03:19.148592  141884 fix.go:216] guest clock: 1733965399.101726035
	I1212 01:03:19.148602  141884 fix.go:229] Guest: 2024-12-12 01:03:19.101726035 +0000 UTC Remote: 2024-12-12 01:03:19.032967067 +0000 UTC m=+242.472137495 (delta=68.758968ms)
	I1212 01:03:19.148628  141884 fix.go:200] guest clock delta is within tolerance: 68.758968ms
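
The `date +%s.%N` round trip above measures the skew between the guest clock and the host: parse the guest's seconds.nanoseconds output, diff it against the local wall clock, and accept it when the delta is small. A sketch of that check (the 2s tolerance is an assumption, not necessarily minikube's exact threshold):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns "seconds.nanoseconds" (the `date +%s.%N` format) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1733965399.101726035") // value from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold for this sketch
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
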
	I1212 01:03:19.148635  141884 start.go:83] releasing machines lock for "default-k8s-diff-port-076578", held for 19.631903968s
	I1212 01:03:19.148688  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.149016  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:19.151497  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.151926  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.151954  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.152124  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152598  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152762  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152834  141884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:19.152892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.152952  141884 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:19.152972  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.155620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155694  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.155962  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156057  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.156114  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156123  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156316  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156327  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156469  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156583  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156619  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156826  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.156824  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.268001  141884 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:19.275696  141884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:19.426624  141884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:19.432842  141884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:19.432911  141884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:19.449082  141884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:19.449108  141884 start.go:495] detecting cgroup driver to use...
	I1212 01:03:19.449187  141884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:19.466543  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:19.482668  141884 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:19.482733  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:19.497124  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:19.512626  141884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:19.624948  141884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:19.779469  141884 docker.go:233] disabling docker service ...
	I1212 01:03:19.779545  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:19.794888  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:19.810497  141884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:19.954827  141884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:20.086435  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:20.100917  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:20.120623  141884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:20.120683  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.134353  141884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:20.134431  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.150373  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.165933  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.181524  141884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:20.196891  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.209752  141884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.228990  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
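
The run of `sed -i` commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The same kind of in-place rewrite in Go, sketched for just the pause_image key (setCrioPauseImage is a hypothetical helper, not minikube's API):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioPauseImage replaces the pause_image line, matching the sed pattern above.
    func setCrioPauseImage(path, image string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	updated := re.ReplaceAllString(string(data), fmt.Sprintf("pause_image = %q", image))
    	return os.WriteFile(path, []byte(updated), 0644)
    }

    func main() {
    	if err := setCrioPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
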
	I1212 01:03:20.241553  141884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:20.251819  141884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:20.251883  141884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:20.267155  141884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
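
When the bridge-netfilter sysctl is missing (the status 255 above), the fallback is to load br_netfilter and enable IPv4 forwarding directly. A rough Go equivalent of those two commands, shown only as a sketch:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
    		// Module not loaded yet; this is the `sudo modprobe br_netfilter` step.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "modprobe failed: %v: %s\n", err, out)
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
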
	I1212 01:03:20.277683  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:20.427608  141884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:20.525699  141884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:20.525804  141884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:20.530984  141884 start.go:563] Will wait 60s for crictl version
	I1212 01:03:20.531055  141884 ssh_runner.go:195] Run: which crictl
	I1212 01:03:20.535013  141884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:20.576177  141884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:20.576251  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.605529  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.638175  141884 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:20.639475  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:20.642566  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643001  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:20.643034  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643196  141884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:20.647715  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:20.662215  141884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:20.662337  141884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:20.662381  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:20.705014  141884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:20.705112  141884 ssh_runner.go:195] Run: which lz4
	I1212 01:03:20.709477  141884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:20.714111  141884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:20.714145  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:19.666527  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:21.666676  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:24.165316  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:20.457742  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting to get IP...
	I1212 01:03:20.458818  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.459318  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.459384  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.459280  143077 retry.go:31] will retry after 312.060355ms: waiting for machine to come up
	I1212 01:03:20.772778  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.773842  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.773876  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.773802  143077 retry.go:31] will retry after 381.023448ms: waiting for machine to come up
	I1212 01:03:21.156449  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.156985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.157017  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.156943  143077 retry.go:31] will retry after 395.528873ms: waiting for machine to come up
	I1212 01:03:21.554397  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.554873  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.554905  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.554833  143077 retry.go:31] will retry after 542.808989ms: waiting for machine to come up
	I1212 01:03:22.099791  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.100330  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.100360  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.100301  143077 retry.go:31] will retry after 627.111518ms: waiting for machine to come up
	I1212 01:03:22.728727  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.729219  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.729244  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.729167  143077 retry.go:31] will retry after 649.039654ms: waiting for machine to come up
	I1212 01:03:23.379498  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:23.379935  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:23.379968  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:23.379864  143077 retry.go:31] will retry after 1.057286952s: waiting for machine to come up
	I1212 01:03:24.438408  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:24.438821  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:24.438849  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:24.438774  143077 retry.go:31] will retry after 912.755322ms: waiting for machine to come up
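
The repeated "will retry after ..." lines are a wait loop polling libvirt until the VM gets a DHCP lease, with growing, jittered delays between attempts. A generic sketch of that pattern (the backoff schedule here is an assumption, not the exact one in retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil keeps calling fn with a growing, jittered delay until it
    // succeeds or the overall timeout elapses.
    func retryUntil(timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("attempt %d: will retry after %v: %v\n", attempt, wait, err)
    		time.Sleep(wait)
    		delay = delay * 3 / 2
    	}
    }

    func main() {
    	_ = retryUntil(2*time.Second, func() error {
    		return errors.New("waiting for machine to come up")
    	})
    }
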
	I1212 01:03:22.285157  141884 crio.go:462] duration metric: took 1.575709911s to copy over tarball
	I1212 01:03:22.285258  141884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:24.495814  141884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210502234s)
	I1212 01:03:24.495848  141884 crio.go:469] duration metric: took 2.210655432s to extract the tarball
	I1212 01:03:24.495857  141884 ssh_runner.go:146] rm: /preloaded.tar.lz4
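
The preload path above copies a ~392 MB image tarball into the VM and unpacks it with tar plus lz4, logging the elapsed time as a duration metric. A simplified local equivalent of the extraction step, sketched with os/exec:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// Same flags as the log line above; run against a local copy of the tarball.
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "extract failed:", err)
    		return
    	}
    	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
    }
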
	I1212 01:03:24.533396  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:24.581392  141884 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:24.581419  141884 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:24.581428  141884 kubeadm.go:934] updating node { 192.168.39.174 8444 v1.31.2 crio true true} ...
	I1212 01:03:24.581524  141884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-076578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:24.581594  141884 ssh_runner.go:195] Run: crio config
	I1212 01:03:24.625042  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:24.625073  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:24.625083  141884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:24.625111  141884 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-076578 NodeName:default-k8s-diff-port-076578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:24.625238  141884 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-076578"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:24.625313  141884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:24.635818  141884 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:24.635903  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:24.645966  141884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:03:24.665547  141884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:24.682639  141884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
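
The kubeadm, kubelet and kube-proxy YAML printed earlier is rendered from Go structs and templates and then copied to the VM by the scp calls just above. A toy version of that rendering for two fields, using a much-reduced template (the template text is illustrative, not minikube's):

    package main

    import (
    	"os"
    	"text/template"
    )

    const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: {{.CgroupDriver}}
    containerRuntimeEndpoint: {{.CRISocket}}
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
    	// Values matching the generated config above.
    	_ = t.Execute(os.Stdout, struct {
    		CgroupDriver string
    		CRISocket    string
    	}{
    		CgroupDriver: "cgroupfs",
    		CRISocket:    "unix:///var/run/crio/crio.sock",
    	})
    }
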
	I1212 01:03:24.700147  141884 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:24.704172  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:24.716697  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:24.842374  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:24.860641  141884 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578 for IP: 192.168.39.174
	I1212 01:03:24.860676  141884 certs.go:194] generating shared ca certs ...
	I1212 01:03:24.860700  141884 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:24.860888  141884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:24.860955  141884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:24.860970  141884 certs.go:256] generating profile certs ...
	I1212 01:03:24.861110  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.key
	I1212 01:03:24.861200  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key.4a68806a
	I1212 01:03:24.861251  141884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key
	I1212 01:03:24.861391  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:24.861444  141884 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:24.861458  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:24.861498  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:24.861535  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:24.861565  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:24.861629  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:24.862588  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:24.899764  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:24.950373  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:24.983222  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:25.017208  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 01:03:25.042653  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:25.071358  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:25.097200  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:25.122209  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:25.150544  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:25.181427  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:25.210857  141884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:25.229580  141884 ssh_runner.go:195] Run: openssl version
	I1212 01:03:25.236346  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:25.247510  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252355  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252407  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.258511  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:25.272698  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:25.289098  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295737  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295806  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.304133  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:25.315805  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:25.328327  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333482  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333539  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.339367  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:25.351612  141884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:25.357060  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:25.363452  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:25.369984  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:25.376434  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:25.382895  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:25.389199  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
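
Each `openssl x509 -checkend 86400` call above asks whether a certificate will still be valid 24 hours from now, which is what decides whether it gets regenerated. The same check in Go with crypto/x509 (expiresWithin is a hypothetical helper):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires inside the window.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
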
	I1212 01:03:25.395232  141884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:25.395325  141884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:25.395370  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.439669  141884 cri.go:89] found id: ""
	I1212 01:03:25.439749  141884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:25.453870  141884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:25.453893  141884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:25.453951  141884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:25.464552  141884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:25.465609  141884 kubeconfig.go:125] found "default-k8s-diff-port-076578" server: "https://192.168.39.174:8444"
	I1212 01:03:25.467767  141884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:25.477907  141884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I1212 01:03:25.477943  141884 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:25.477958  141884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:25.478018  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.521891  141884 cri.go:89] found id: ""
	I1212 01:03:25.521978  141884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:25.539029  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:25.549261  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:25.549283  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:25.549341  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:03:25.558948  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:25.559022  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:25.568947  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:03:25.579509  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:25.579614  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:25.589573  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.600434  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:25.600498  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.610337  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:03:25.619956  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:25.620014  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:25.631231  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:25.641366  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:25.761159  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:26.165525  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:28.168457  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.168492  141469 pod_ready.go:82] duration metric: took 10.510517291s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.168506  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175334  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.175361  141469 pod_ready.go:82] duration metric: took 6.84531ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175375  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183060  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.183093  141469 pod_ready.go:82] duration metric: took 7.709158ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183106  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.190999  141469 pod_ready.go:93] pod "kube-proxy-9f6lj" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.191028  141469 pod_ready.go:82] duration metric: took 7.913069ms for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.191040  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199945  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.199972  141469 pod_ready.go:82] duration metric: took 8.923682ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199984  141469 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:25.352682  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:25.353126  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:25.353154  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:25.353073  143077 retry.go:31] will retry after 1.136505266s: waiting for machine to come up
	I1212 01:03:26.491444  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:26.491927  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:26.491955  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:26.491868  143077 retry.go:31] will retry after 1.467959561s: waiting for machine to come up
	I1212 01:03:27.961709  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:27.962220  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:27.962255  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:27.962169  143077 retry.go:31] will retry after 2.70831008s: waiting for machine to come up
	I1212 01:03:26.830271  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069070962s)
	I1212 01:03:26.830326  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.035935  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.113317  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.210226  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:27.210329  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:27.710504  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.211114  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.242967  141884 api_server.go:72] duration metric: took 1.032736901s to wait for apiserver process to appear ...
	I1212 01:03:28.243012  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:28.243038  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:28.243643  141884 api_server.go:269] stopped: https://192.168.39.174:8444/healthz: Get "https://192.168.39.174:8444/healthz": dial tcp 192.168.39.174:8444: connect: connection refused
	I1212 01:03:28.743921  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.546075  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.546113  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.546129  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.621583  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.621619  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.743860  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.750006  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:31.750052  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.243382  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.269990  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.270033  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.743516  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.752979  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.753012  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:33.243571  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:33.247902  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:03:33.253786  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:33.253810  141884 api_server.go:131] duration metric: took 5.010790107s to wait for apiserver health ...
	I1212 01:03:33.253820  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:33.253826  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:33.255762  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:30.208396  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:32.708024  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:30.671930  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:30.672414  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:30.672442  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:30.672366  143077 retry.go:31] will retry after 2.799706675s: waiting for machine to come up
	I1212 01:03:33.474261  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:33.474816  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:33.474851  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:33.474758  143077 retry.go:31] will retry after 4.339389188s: waiting for machine to come up
	I1212 01:03:33.257007  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:33.267934  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:33.286197  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:33.297934  141884 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:33.297982  141884 system_pods.go:61] "coredns-7c65d6cfc9-xn886" [db1f42f1-93d9-4942-813d-e3de1cc24801] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:33.297995  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [25555578-8169-4986-aa10-06a442152c50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:33.298006  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [1004c64c-91ca-43c3-9c3d-43dab13d3812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:33.298023  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [63d42313-4ea9-44f9-a8eb-b0c6c73424c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:33.298039  141884 system_pods.go:61] "kube-proxy-7frgh" [191ed421-4297-47c7-a46d-407a8eaa0378] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:33.298049  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [1506a505-697c-4b80-b7ef-55de1116fa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:33.298060  141884 system_pods.go:61] "metrics-server-6867b74b74-k9s7n" [806badc0-b609-421f-9203-3fd91212a145] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:33.298077  141884 system_pods.go:61] "storage-provisioner" [bc133673-b7e2-42b2-98ac-e3284c9162ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:33.298090  141884 system_pods.go:74] duration metric: took 11.875762ms to wait for pod list to return data ...
	I1212 01:03:33.298105  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:33.302482  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:33.302517  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:33.302532  141884 node_conditions.go:105] duration metric: took 4.418219ms to run NodePressure ...
	I1212 01:03:33.302555  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:33.728028  141884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735780  141884 kubeadm.go:739] kubelet initialised
	I1212 01:03:33.735810  141884 kubeadm.go:740] duration metric: took 7.738781ms waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735824  141884 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:33.743413  141884 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:35.750012  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.348909  141411 start.go:364] duration metric: took 54.693436928s to acquireMachinesLock for "no-preload-242725"
	I1212 01:03:39.348976  141411 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:39.348990  141411 fix.go:54] fixHost starting: 
	I1212 01:03:39.349442  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:39.349485  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:39.367203  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I1212 01:03:39.367584  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:39.368158  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:03:39.368185  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:39.368540  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:39.368717  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:39.368854  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:03:39.370433  141411 fix.go:112] recreateIfNeeded on no-preload-242725: state=Stopped err=<nil>
	I1212 01:03:39.370460  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	W1212 01:03:39.370594  141411 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:39.372621  141411 out.go:177] * Restarting existing kvm2 VM for "no-preload-242725" ...
	I1212 01:03:35.206417  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.208384  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.818233  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818777  142150 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 01:03:37.818808  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818818  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 01:03:37.819321  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.819376  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | skip adding static IP to network mk-old-k8s-version-738445 - found existing host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"}
	I1212 01:03:37.819390  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
	I1212 01:03:37.819412  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 01:03:37.819428  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 01:03:37.821654  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822057  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.822084  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822234  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 01:03:37.822265  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 01:03:37.822311  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:37.822325  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 01:03:37.822346  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 01:03:37.951989  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:37.952380  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 01:03:37.953037  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:37.955447  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.955770  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.955801  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.956073  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 01:03:37.956261  142150 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:37.956281  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:37.956490  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:37.958938  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959225  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.959262  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959406  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:37.959569  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959749  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959912  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:37.960101  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:37.960348  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:37.960364  142150 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:38.076202  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:38.076231  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076484  142150 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 01:03:38.076506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076678  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.079316  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079689  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.079717  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.080047  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080178  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080313  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.080481  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.080693  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.080708  142150 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 01:03:38.212896  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 01:03:38.212934  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.215879  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216314  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.216353  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216568  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.216792  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.216980  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.217138  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.217321  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.217556  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.217574  142150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:38.341064  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:38.341103  142150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:38.341148  142150 buildroot.go:174] setting up certificates
	I1212 01:03:38.341167  142150 provision.go:84] configureAuth start
	I1212 01:03:38.341182  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.341471  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:38.343939  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344355  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.344385  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.346597  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.346910  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.346960  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.347103  142150 provision.go:143] copyHostCerts
	I1212 01:03:38.347168  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:38.347188  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:38.347247  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:38.347363  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:38.347373  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:38.347397  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:38.347450  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:38.347457  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:38.347476  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:38.347523  142150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
	I1212 01:03:38.675149  142150 provision.go:177] copyRemoteCerts
	I1212 01:03:38.675217  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:38.675251  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.678239  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678639  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.678677  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.679049  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.679174  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.679294  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:38.770527  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:38.797696  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:38.822454  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 01:03:38.847111  142150 provision.go:87] duration metric: took 505.925391ms to configureAuth
	I1212 01:03:38.847145  142150 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:38.847366  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 01:03:38.847459  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.850243  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850594  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.850621  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850779  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.850981  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851153  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851340  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.851581  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.851786  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.851803  142150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:39.093404  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:39.093440  142150 machine.go:96] duration metric: took 1.137164233s to provisionDockerMachine
	I1212 01:03:39.093457  142150 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 01:03:39.093474  142150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:39.093516  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.093848  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:39.093891  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.096719  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097117  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.097151  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097305  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.097497  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.097650  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.097773  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.186726  142150 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:39.191223  142150 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:39.191249  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:39.191337  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:39.191438  142150 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:39.191557  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:39.201460  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:39.229101  142150 start.go:296] duration metric: took 135.624628ms for postStartSetup
	I1212 01:03:39.229146  142150 fix.go:56] duration metric: took 20.080331642s for fixHost
	I1212 01:03:39.229168  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.231985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232443  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.232479  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232702  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.232913  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233076  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233213  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.233368  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:39.233632  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:39.233649  142150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:39.348721  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965419.319505647
	
	I1212 01:03:39.348749  142150 fix.go:216] guest clock: 1733965419.319505647
	I1212 01:03:39.348761  142150 fix.go:229] Guest: 2024-12-12 01:03:39.319505647 +0000 UTC Remote: 2024-12-12 01:03:39.229149912 +0000 UTC m=+234.032647876 (delta=90.355735ms)
	I1212 01:03:39.348787  142150 fix.go:200] guest clock delta is within tolerance: 90.355735ms
	I1212 01:03:39.348796  142150 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 20.20001796s
	I1212 01:03:39.348829  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.349099  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:39.352088  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352481  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.352510  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352667  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353244  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353428  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353528  142150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:39.353575  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.353645  142150 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:39.353674  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.356260  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356614  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.356644  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356675  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356908  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357112  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.357172  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.357293  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357375  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357438  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.357514  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357652  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357765  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.441961  142150 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:39.478428  142150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:39.631428  142150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:39.637870  142150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:39.637958  142150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:39.655923  142150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
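Note: the CNI step above amounts to renaming any bridge/podman CNI configs so they stop taking effect before kubeadm brings up its own networking; a minimal, properly quoted equivalent of the logged find command (a sketch, assuming the stock /etc/cni/net.d layout):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;   # e.g. 87-podman-bridge.conflist -> 87-podman-bridge.conflist.mk_disabled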
	I1212 01:03:39.655951  142150 start.go:495] detecting cgroup driver to use...
	I1212 01:03:39.656042  142150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:39.676895  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:39.692966  142150 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:39.693048  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:39.710244  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:39.725830  142150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:39.848998  142150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:40.014388  142150 docker.go:233] disabling docker service ...
	I1212 01:03:40.014458  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:40.035579  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:40.052188  142150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:40.184958  142150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:40.332719  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:40.349338  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:40.371164  142150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:03:40.371232  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.382363  142150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:40.382437  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.393175  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.404397  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
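Note: the three sed edits above leave the CRI-O drop-in with the v1.20-era pause image and the cgroupfs driver that the kubelet config further down also uses; a quick verification (sketch):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.2"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"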
	I1212 01:03:40.417867  142150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:40.432988  142150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:40.447070  142150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:40.447145  142150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:40.460260  142150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
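Note: the sysctl failure above is expected while br_netfilter is not loaded; the prep boils down to the sequence below (sketch; once the module is loaded, bridge-nf-call-iptables typically defaults to 1, so it is only probed, not set):

	sudo sysctl net.bridge.bridge-nf-call-iptables   # fails until br_netfilter provides /proc/sys/net/bridge/*
	sudo modprobe br_netfilter                       # load the module
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # IPv4 forwarding needed for pod traffic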
	I1212 01:03:40.472139  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:40.616029  142150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:40.724787  142150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:40.724874  142150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:40.732096  142150 start.go:563] Will wait 60s for crictl version
	I1212 01:03:40.732168  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:40.737266  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:40.790677  142150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
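Note: the runtime probe above is the plain CLI equivalent of (sketch, output mirrored from the log):

	sudo /usr/bin/crictl version
	# Version:            0.1.0
	# RuntimeName:        cri-o
	# RuntimeVersion:     1.29.1
	# RuntimeApiVersion:  v1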
	I1212 01:03:40.790765  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.825617  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.857257  142150 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1212 01:03:37.750453  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.752224  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.374093  141411 main.go:141] libmachine: (no-preload-242725) Calling .Start
	I1212 01:03:39.374303  141411 main.go:141] libmachine: (no-preload-242725) Ensuring networks are active...
	I1212 01:03:39.375021  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network default is active
	I1212 01:03:39.375456  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network mk-no-preload-242725 is active
	I1212 01:03:39.375951  141411 main.go:141] libmachine: (no-preload-242725) Getting domain xml...
	I1212 01:03:39.376726  141411 main.go:141] libmachine: (no-preload-242725) Creating domain...
	I1212 01:03:40.703754  141411 main.go:141] libmachine: (no-preload-242725) Waiting to get IP...
	I1212 01:03:40.705274  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.705752  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.705821  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.705709  143226 retry.go:31] will retry after 196.576482ms: waiting for machine to come up
	I1212 01:03:40.904341  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.904718  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.904740  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.904669  143226 retry.go:31] will retry after 375.936901ms: waiting for machine to come up
	I1212 01:03:41.282278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.282839  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.282871  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.282793  143226 retry.go:31] will retry after 427.731576ms: waiting for machine to come up
	I1212 01:03:41.712553  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.713198  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.713231  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.713084  143226 retry.go:31] will retry after 421.07445ms: waiting for machine to come up
	I1212 01:03:39.707174  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:41.711103  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.207685  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:40.858851  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:40.861713  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:40.862166  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862355  142150 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:40.866911  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
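Note: the host.minikube.internal update above is the usual idempotent /etc/hosts rewrite: drop any stale entry, append the current gateway IP, then copy the temp file back into place (sketch):

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.72.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts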
	I1212 01:03:40.879513  142150 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:40.879655  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 01:03:40.879718  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:40.927436  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:40.927517  142150 ssh_runner.go:195] Run: which lz4
	I1212 01:03:40.932446  142150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:40.937432  142150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:40.937461  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 01:03:42.695407  142150 crio.go:462] duration metric: took 1.763008004s to copy over tarball
	I1212 01:03:42.695494  142150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:41.768335  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.252708  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.754333  141884 pod_ready.go:93] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.754362  141884 pod_ready.go:82] duration metric: took 11.010925207s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.754371  141884 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760121  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.760142  141884 pod_ready.go:82] duration metric: took 5.764171ms for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760151  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765554  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.765575  141884 pod_ready.go:82] duration metric: took 5.417017ms for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765589  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:42.135878  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.136341  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.136367  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.136284  143226 retry.go:31] will retry after 477.81881ms: waiting for machine to come up
	I1212 01:03:42.616400  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.616906  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.616929  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.616858  143226 retry.go:31] will retry after 597.608319ms: waiting for machine to come up
	I1212 01:03:43.215837  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:43.216430  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:43.216454  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:43.216363  143226 retry.go:31] will retry after 1.118837214s: waiting for machine to come up
	I1212 01:03:44.336666  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:44.337229  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:44.337253  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:44.337187  143226 retry.go:31] will retry after 1.008232952s: waiting for machine to come up
	I1212 01:03:45.346868  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:45.347386  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:45.347423  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:45.347307  143226 retry.go:31] will retry after 1.735263207s: waiting for machine to come up
	I1212 01:03:47.084570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:47.084980  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:47.085012  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:47.084931  143226 retry.go:31] will retry after 1.662677797s: waiting for machine to come up
	I1212 01:03:46.208324  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.707694  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:45.698009  142150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002470206s)
	I1212 01:03:45.698041  142150 crio.go:469] duration metric: took 3.002598421s to extract the tarball
	I1212 01:03:45.698057  142150 ssh_runner.go:146] rm: /preloaded.tar.lz4
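Note: no preloaded images were found in CRI-O's store, so the ~450 MB cached tarball is copied up and unpacked under /var; on the node the step boils down to (sketch):

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4   # unpack the container image store
	sudo rm -f /preloaded.tar.lz4                                                                  # reclaim the disk space
	sudo crictl images --output json                                                               # re-check what the runtime now sees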
	I1212 01:03:45.746245  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:45.783730  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:45.783758  142150 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:03:45.783842  142150 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.783850  142150 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.783909  142150 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.783919  142150 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:45.783965  142150 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.783988  142150 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.783989  142150 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.783935  142150 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.785706  142150 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.785722  142150 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785696  142150 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.785757  142150 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.010563  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.011085  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 01:03:46.072381  142150 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 01:03:46.072424  142150 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.072478  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.113400  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.113431  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.114036  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.114169  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.120739  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.124579  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.124728  142150 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 01:03:46.124754  142150 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 01:03:46.124784  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287160  142150 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 01:03:46.287214  142150 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.287266  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287272  142150 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 01:03:46.287303  142150 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.287353  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294327  142150 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 01:03:46.294369  142150 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.294417  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294420  142150 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 01:03:46.294451  142150 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.294488  142150 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 01:03:46.294501  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294519  142150 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.294547  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.294561  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294640  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.296734  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.297900  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.310329  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.400377  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.400443  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.400478  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.400489  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.426481  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.434403  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.434471  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.568795  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 01:03:46.568915  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.568956  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.569017  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.584299  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.584337  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.608442  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.716715  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.716749  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 01:03:46.727723  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.730180  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 01:03:46.730347  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 01:03:46.744080  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 01:03:46.770152  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 01:03:46.802332  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 01:03:48.053863  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:48.197060  142150 cache_images.go:92] duration metric: took 2.413284252s to LoadCachedImages
	W1212 01:03:48.197176  142150 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1212 01:03:48.197197  142150 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 01:03:48.197352  142150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:48.197443  142150 ssh_runner.go:195] Run: crio config
	I1212 01:03:48.246700  142150 cni.go:84] Creating CNI manager for ""
	I1212 01:03:48.246731  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:48.246743  142150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:48.246771  142150 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 01:03:48.246952  142150 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
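Note: the rendered config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and compared against the previous run before being promoted; both steps appear further down in this log (sketch):

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml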
	
	I1212 01:03:48.247031  142150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 01:03:48.257337  142150 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:48.257412  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:48.267272  142150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 01:03:48.284319  142150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:48.301365  142150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1212 01:03:48.321703  142150 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:48.326805  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:48.343523  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:48.476596  142150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:48.497742  142150 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 01:03:48.497830  142150 certs.go:194] generating shared ca certs ...
	I1212 01:03:48.497859  142150 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:48.498094  142150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:48.498160  142150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:48.498177  142150 certs.go:256] generating profile certs ...
	I1212 01:03:48.498311  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 01:03:48.498388  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 01:03:48.498445  142150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 01:03:48.498603  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:48.498651  142150 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:48.498665  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:48.498700  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:48.498732  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:48.498761  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:48.498816  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:48.499418  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:48.546900  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:48.587413  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:48.617873  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:48.645334  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 01:03:48.673348  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:03:48.707990  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:48.748273  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:03:48.785187  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:48.818595  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:48.843735  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:48.871353  142150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:48.893168  142150 ssh_runner.go:195] Run: openssl version
	I1212 01:03:48.902034  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:48.916733  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921766  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921849  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.928169  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:48.939794  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:48.951260  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957920  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957987  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.965772  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:48.977889  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:48.989362  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995796  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995866  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:49.002440  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
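Note: each CA copied above gets an OpenSSL subject-hash symlink so the system trust store can resolve it; the pattern behind the 3ec20f2e.0 / b5213941.0 / 51391683.0 names is (sketch, using minikubeCA.pem as the example):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints the subject hash, e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"              # <hash>.0 is where OpenSSL looks it up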
	I1212 01:03:49.014144  142150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:49.020570  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:49.027464  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:49.033770  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:49.040087  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:49.046103  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:49.052288  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
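Note: the -checkend 86400 probes above exit non-zero when a certificate expires within the next 24 hours, which is how stale control-plane certs get flagged; the same check by hand (sketch):

	for c in apiserver-kubelet-client apiserver-etcd-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
	    && echo "${c}: valid for at least another 24h" \
	    || echo "${c}: expires within 24h"
	done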
	I1212 01:03:49.058638  142150 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:49.058762  142150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:49.058820  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.101711  142150 cri.go:89] found id: ""
	I1212 01:03:49.101800  142150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:49.113377  142150 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:49.113398  142150 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:49.113439  142150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:49.124296  142150 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:49.125851  142150 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:03:49.126876  142150 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-86355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-738445" cluster setting kubeconfig missing "old-k8s-version-738445" context setting]
	I1212 01:03:49.127925  142150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:49.129837  142150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:49.143200  142150 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.25
	I1212 01:03:49.143244  142150 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:49.143262  142150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:49.143339  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.190150  142150 cri.go:89] found id: ""
	I1212 01:03:49.190240  142150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:49.208500  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:49.219194  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:49.219221  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:49.219299  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:49.231345  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:49.231442  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:49.244931  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:49.254646  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:49.254721  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:49.264535  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.273770  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:49.273875  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.284129  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:49.293154  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:49.293221  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:49.302654  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:49.312579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:49.458825  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:48.069316  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.069362  141884 pod_ready.go:82] duration metric: took 3.303763458s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.069380  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328758  141884 pod_ready.go:93] pod "kube-proxy-7frgh" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.328784  141884 pod_ready.go:82] duration metric: took 259.396178ms for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328798  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337082  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.337106  141884 pod_ready.go:82] duration metric: took 8.298777ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337119  141884 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:50.343458  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.748914  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:48.749510  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:48.749535  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:48.749475  143226 retry.go:31] will retry after 2.670904101s: waiting for machine to come up
	I1212 01:03:51.421499  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:51.421915  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:51.421961  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:51.421862  143226 retry.go:31] will retry after 3.566697123s: waiting for machine to come up
	I1212 01:03:50.708435  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:53.207675  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:50.328104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.599973  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.749920  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
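Note: the restart path above replays kubeadm's init phases one by one against the generated config, with PATH pointed at the cached v1.20.0 binaries; the full sequence in one place (sketch):

	B=/var/lib/minikube/binaries/v1.20.0
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  # $phase is intentionally unquoted so "certs all" expands to two arguments
	  sudo env PATH="$B:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done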
	I1212 01:03:50.834972  142150 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:50.835093  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.335779  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.835728  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.335936  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.335817  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.836146  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.335264  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.835917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
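Note: the repeated pgrep calls above are the wait for the kube-apiserver process started by the control-plane phase; as a loop (sketch):

	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5   # mirrors the ~500 ms retry cadence visible in the timestamps above
	done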
	I1212 01:03:52.344098  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.344166  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:56.345835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.990515  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:54.990916  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:54.990941  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:54.990869  143226 retry.go:31] will retry after 4.288131363s: waiting for machine to come up
	I1212 01:03:55.706167  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:57.707796  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:55.335677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:55.835164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.335826  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.835888  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.335539  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.835520  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.335630  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.835457  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.835939  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.843944  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.844210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:59.284312  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.284807  141411 main.go:141] libmachine: (no-preload-242725) Found IP for machine: 192.168.61.222
	I1212 01:03:59.284834  141411 main.go:141] libmachine: (no-preload-242725) Reserving static IP address...
	I1212 01:03:59.284851  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has current primary IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.285300  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.285334  141411 main.go:141] libmachine: (no-preload-242725) DBG | skip adding static IP to network mk-no-preload-242725 - found existing host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"}
	I1212 01:03:59.285357  141411 main.go:141] libmachine: (no-preload-242725) Reserved static IP address: 192.168.61.222
	I1212 01:03:59.285376  141411 main.go:141] libmachine: (no-preload-242725) Waiting for SSH to be available...
	I1212 01:03:59.285390  141411 main.go:141] libmachine: (no-preload-242725) DBG | Getting to WaitForSSH function...
	I1212 01:03:59.287532  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287840  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.287869  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287970  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH client type: external
	I1212 01:03:59.287998  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa (-rw-------)
	I1212 01:03:59.288043  141411 main.go:141] libmachine: (no-preload-242725) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:59.288066  141411 main.go:141] libmachine: (no-preload-242725) DBG | About to run SSH command:
	I1212 01:03:59.288092  141411 main.go:141] libmachine: (no-preload-242725) DBG | exit 0
	I1212 01:03:59.415723  141411 main.go:141] libmachine: (no-preload-242725) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:59.416104  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetConfigRaw
	I1212 01:03:59.416755  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.419446  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.419848  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.419879  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.420182  141411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json ...
	I1212 01:03:59.420388  141411 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:59.420412  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:59.420637  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.422922  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423257  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.423278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423432  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.423626  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423787  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423918  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.424051  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.424222  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.424231  141411 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:59.536768  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:59.536796  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537016  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:03:59.537042  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537234  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.539806  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540110  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.540141  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540337  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.540509  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540665  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540800  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.540973  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.541155  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.541171  141411 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-242725 && echo "no-preload-242725" | sudo tee /etc/hostname
	I1212 01:03:59.668244  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-242725
	
	I1212 01:03:59.668269  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.671021  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671353  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.671374  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671630  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.671851  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672000  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672160  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.672310  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.672485  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.672502  141411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-242725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-242725/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-242725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:59.792950  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:59.792985  141411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:59.793011  141411 buildroot.go:174] setting up certificates
	I1212 01:03:59.793024  141411 provision.go:84] configureAuth start
	I1212 01:03:59.793041  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.793366  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.796185  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796599  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.796638  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796783  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.799165  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799532  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.799558  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799711  141411 provision.go:143] copyHostCerts
	I1212 01:03:59.799780  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:59.799804  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:59.799869  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:59.800004  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:59.800015  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:59.800051  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:59.800144  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:59.800155  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:59.800182  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:59.800263  141411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.no-preload-242725 san=[127.0.0.1 192.168.61.222 localhost minikube no-preload-242725]
	I1212 01:03:59.987182  141411 provision.go:177] copyRemoteCerts
	I1212 01:03:59.987249  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:59.987290  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.989902  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990285  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.990317  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990520  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.990712  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.990856  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.990981  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.078289  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:04:00.103149  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:04:00.131107  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:04:00.159076  141411 provision.go:87] duration metric: took 366.034024ms to configureAuth
	I1212 01:04:00.159103  141411 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:04:00.159305  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:04:00.159401  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.162140  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162537  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.162570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162696  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.162864  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163016  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163124  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.163262  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.163436  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.163451  141411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:04:00.407729  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:04:00.407758  141411 machine.go:96] duration metric: took 987.35601ms to provisionDockerMachine
	I1212 01:04:00.407773  141411 start.go:293] postStartSetup for "no-preload-242725" (driver="kvm2")
	I1212 01:04:00.407787  141411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:04:00.407810  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.408186  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:04:00.408218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.410950  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411329  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.411360  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411585  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.411809  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.411981  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.412115  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.498221  141411 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:04:00.502621  141411 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:04:00.502644  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:04:00.502705  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:04:00.502779  141411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:04:00.502863  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:04:00.512322  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:00.540201  141411 start.go:296] duration metric: took 132.410555ms for postStartSetup
	I1212 01:04:00.540250  141411 fix.go:56] duration metric: took 21.191260423s for fixHost
	I1212 01:04:00.540287  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.542631  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.542983  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.543011  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.543212  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.543393  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543556  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543702  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.543867  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.544081  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.544095  141411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:04:00.656532  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965440.609922961
	
	I1212 01:04:00.656560  141411 fix.go:216] guest clock: 1733965440.609922961
	I1212 01:04:00.656569  141411 fix.go:229] Guest: 2024-12-12 01:04:00.609922961 +0000 UTC Remote: 2024-12-12 01:04:00.540255801 +0000 UTC m=+358.475944555 (delta=69.66716ms)
	I1212 01:04:00.656597  141411 fix.go:200] guest clock delta is within tolerance: 69.66716ms
	I1212 01:04:00.656616  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 21.307670093s
	I1212 01:04:00.656644  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.656898  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:00.659345  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659694  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.659722  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659878  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660405  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660584  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660663  141411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:04:00.660731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.660751  141411 ssh_runner.go:195] Run: cat /version.json
	I1212 01:04:00.660771  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.663331  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663458  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663717  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663757  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663789  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663802  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663867  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664039  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664044  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664201  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664202  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664359  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664359  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.664490  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.777379  141411 ssh_runner.go:195] Run: systemctl --version
	I1212 01:04:00.783765  141411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:04:00.933842  141411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:04:00.941376  141411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:04:00.941441  141411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:04:00.958993  141411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:04:00.959021  141411 start.go:495] detecting cgroup driver to use...
	I1212 01:04:00.959084  141411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:04:00.977166  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:04:00.991166  141411 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:04:00.991231  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:04:01.004993  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:04:01.018654  141411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:04:01.136762  141411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:04:01.300915  141411 docker.go:233] disabling docker service ...
	I1212 01:04:01.301036  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:04:01.316124  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:04:01.329544  141411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:04:01.451034  141411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:04:01.583471  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:04:01.611914  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:04:01.632628  141411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:04:01.632706  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.644315  141411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:04:01.644384  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.656980  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.668295  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.679885  141411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:04:01.692032  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.703893  141411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.724486  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.737251  141411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:04:01.748955  141411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:04:01.749025  141411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:04:01.763688  141411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:04:01.773871  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:01.903690  141411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:04:02.006921  141411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:04:02.007013  141411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:04:02.013116  141411 start.go:563] Will wait 60s for crictl version
	I1212 01:04:02.013187  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.017116  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:04:02.061210  141411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:04:02.061304  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.093941  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.124110  141411 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:59.708028  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:01.709056  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:04.207527  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.335673  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:00.835254  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.336063  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.835209  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.335874  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.835468  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.335332  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.835312  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.335965  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.835626  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.845618  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.346194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:02.125647  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:02.128481  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.128914  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:02.128973  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.129205  141411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 01:04:02.133801  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:02.148892  141411 kubeadm.go:883] updating cluster {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:04:02.149001  141411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:04:02.149033  141411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:04:02.187762  141411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:04:02.187805  141411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:04:02.187934  141411 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.187988  141411 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.188025  141411 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.188070  141411 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.188118  141411 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.188220  141411 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.188332  141411 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1212 01:04:02.188501  141411 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.189594  141411 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.189674  141411 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.189892  141411 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.190015  141411 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1212 01:04:02.190121  141411 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.190152  141411 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.190169  141411 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.190746  141411 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.372557  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.375185  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.389611  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.394581  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.396799  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.408346  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1212 01:04:02.413152  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.438165  141411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1212 01:04:02.438217  141411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.438272  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.518752  141411 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1212 01:04:02.518804  141411 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.518856  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.556287  141411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1212 01:04:02.556329  141411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.556371  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569629  141411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1212 01:04:02.569671  141411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.569683  141411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1212 01:04:02.569721  141411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.569731  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569770  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667454  141411 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1212 01:04:02.667511  141411 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.667510  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.667532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.667549  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667632  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.667644  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.667671  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.683807  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.784024  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.797709  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.797836  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.797848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.797969  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.822411  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.880580  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.927305  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.928532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.928661  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.938172  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.973083  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:03.023699  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1212 01:04:03.023813  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.069822  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1212 01:04:03.069879  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1212 01:04:03.069920  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1212 01:04:03.069945  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:03.069973  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:03.069990  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:03.070037  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1212 01:04:03.070116  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:03.094188  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1212 01:04:03.094210  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094229  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1212 01:04:03.094249  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094285  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1212 01:04:03.094313  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1212 01:04:03.094379  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1212 01:04:03.094399  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1212 01:04:03.094480  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:04.469173  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.174822  141411 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.080313699s)
	I1212 01:04:05.174869  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1212 01:04:05.174899  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.08062641s)
	I1212 01:04:05.174928  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1212 01:04:05.174968  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.174994  141411 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 01:04:05.175034  141411 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.175086  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:05.175038  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.179340  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:06.207626  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:08.706815  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.335479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:05.835485  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.335252  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.835837  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.335166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.835880  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.336166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.335533  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.835771  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.843908  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:07.654693  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.479543185s)
	I1212 01:04:07.654721  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1212 01:04:07.654743  141411 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.654775  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.475408038s)
	I1212 01:04:07.654848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:07.654784  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.699286  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:09.647620  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.948278157s)
	I1212 01:04:09.647642  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.992718083s)
	I1212 01:04:09.647662  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1212 01:04:09.647683  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 01:04:09.647686  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647734  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647776  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:09.652886  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 01:04:11.112349  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.464585062s)
	I1212 01:04:11.112384  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1212 01:04:11.112412  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.112462  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.206933  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.208623  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.335255  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:10.835915  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.335375  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.835283  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.335618  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.835897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.335425  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.835757  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.335839  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.836078  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.844442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:14.845189  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.083753  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.971262547s)
	I1212 01:04:13.083788  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1212 01:04:13.083821  141411 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:13.083878  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:17.087777  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.003870257s)
	I1212 01:04:17.087818  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1212 01:04:17.087853  141411 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:17.087917  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:15.707981  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:18.207205  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:15.336090  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:15.835274  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.335372  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.835280  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.335431  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.835268  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.335492  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.835414  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.335266  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.835632  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.345467  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:19.845255  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:17.734979  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 01:04:17.735041  141411 cache_images.go:123] Successfully loaded all cached images
	I1212 01:04:17.735049  141411 cache_images.go:92] duration metric: took 15.547226992s to LoadCachedImages
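The block above shows each cached image tarball being transferred to the node and loaded into CRI-O's store with `sudo podman load -i <tarball>`. As a rough illustration of that step only (not minikube's actual implementation), a minimal Go sketch that shells out to podman for a list of tarballs; the example path is taken from the log above:

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
    )

    // loadCachedImages runs `sudo podman load -i <tar>` for each cached image
    // tarball, mirroring the log lines above.
    func loadCachedImages(tarballs []string) error {
        for _, tar := range tarballs {
            cmd := exec.Command("sudo", "podman", "load", "-i", tar)
            out, err := cmd.CombinedOutput()
            if err != nil {
                return fmt.Errorf("podman load %s: %v\n%s", filepath.Base(tar), err, out)
            }
            fmt.Printf("loaded %s\n", filepath.Base(tar))
        }
        return nil
    }

    func main() {
        _ = loadCachedImages([]string{
            "/var/lib/minikube/images/kube-apiserver_v1.31.2", // path copied from the log
        })
    }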
	I1212 01:04:17.735066  141411 kubeadm.go:934] updating node { 192.168.61.222 8443 v1.31.2 crio true true} ...
	I1212 01:04:17.735209  141411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-242725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:04:17.735311  141411 ssh_runner.go:195] Run: crio config
	I1212 01:04:17.780826  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:17.780850  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:17.780859  141411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:04:17.780882  141411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.222 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-242725 NodeName:no-preload-242725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:04:17.781025  141411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-242725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.222"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.222"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
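The kubeadm, kubelet, and kube-proxy configuration dumped above is produced by substituting per-cluster values (node name, advertise address, API server port, pod subnet) into templates. A simplified sketch of that idea with Go's text/template; the template text and field names here are illustrative, not minikube's own:

    package main

    import (
        "os"
        "text/template"
    )

    // clusterParams holds the handful of values substituted into the config
    // above; the field names are hypothetical.
    type clusterParams struct {
        NodeName         string
        AdvertiseAddress string
        BindPort         int
    }

    const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    `

    func main() {
        p := clusterParams{
            NodeName:         "no-preload-242725",
            AdvertiseAddress: "192.168.61.222",
            BindPort:         8443,
        }
        // Render the InitConfiguration fragment to stdout.
        t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }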
	
	I1212 01:04:17.781091  141411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:04:17.792290  141411 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:04:17.792374  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:04:17.802686  141411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1212 01:04:17.819496  141411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:04:17.836164  141411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1212 01:04:17.855844  141411 ssh_runner.go:195] Run: grep 192.168.61.222	control-plane.minikube.internal$ /etc/hosts
	I1212 01:04:17.860034  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
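The bash one-liner above makes the /etc/hosts update idempotent: it strips any existing control-plane.minikube.internal line and appends a fresh one pointing at 192.168.61.222. The same idea as a standalone Go sketch (operating on a local copy rather than on /etc/hosts over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line for the given host and appends a
    // fresh "<ip>\t<host>" entry, mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // remove the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Values taken from the log above; writing to a scratch copy, not /etc/hosts.
        if err := ensureHostsEntry("hosts.copy", "192.168.61.222", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }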
	I1212 01:04:17.874418  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:18.011357  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:04:18.028641  141411 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725 for IP: 192.168.61.222
	I1212 01:04:18.028666  141411 certs.go:194] generating shared ca certs ...
	I1212 01:04:18.028683  141411 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:04:18.028880  141411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:04:18.028940  141411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:04:18.028954  141411 certs.go:256] generating profile certs ...
	I1212 01:04:18.029088  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.key
	I1212 01:04:18.029164  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key.f2ca822e
	I1212 01:04:18.029235  141411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key
	I1212 01:04:18.029404  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:04:18.029438  141411 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:04:18.029449  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:04:18.029485  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:04:18.029517  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:04:18.029555  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:04:18.029621  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:18.030313  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:04:18.082776  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:04:18.116012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:04:18.147385  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:04:18.180861  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:04:18.225067  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:04:18.255999  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:04:18.280193  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:04:18.304830  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:04:18.329012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:04:18.355462  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:04:18.379991  141411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:04:18.397637  141411 ssh_runner.go:195] Run: openssl version
	I1212 01:04:18.403727  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:04:18.415261  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419809  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419885  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.425687  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:04:18.438938  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:04:18.452150  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457050  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457116  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.463151  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:04:18.476193  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:04:18.489034  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493916  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493969  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.500285  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:04:18.513016  141411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:04:18.517996  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:04:18.524465  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:04:18.530607  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:04:18.536857  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:04:18.542734  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:04:18.548786  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
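Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether a certificate expires within the next 24 hours (86400 seconds). An equivalent check in Go with crypto/x509, shown as a sketch rather than minikube's own code; the path is copied from the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // which is what `openssl x509 -checkend <seconds>` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }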
	I1212 01:04:18.554771  141411 kubeadm.go:392] StartCluster: {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:04:18.554897  141411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:04:18.554950  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.593038  141411 cri.go:89] found id: ""
	I1212 01:04:18.593131  141411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:04:18.604527  141411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:04:18.604550  141411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:04:18.604605  141411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:04:18.614764  141411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:04:18.616082  141411 kubeconfig.go:125] found "no-preload-242725" server: "https://192.168.61.222:8443"
	I1212 01:04:18.618611  141411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:04:18.628709  141411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.222
	I1212 01:04:18.628741  141411 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:04:18.628753  141411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:04:18.628814  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.673970  141411 cri.go:89] found id: ""
	I1212 01:04:18.674067  141411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:04:18.692603  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:04:18.704916  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:04:18.704940  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:04:18.704999  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:04:18.714952  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:04:18.715015  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:04:18.724982  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:04:18.734756  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:04:18.734817  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:04:18.744528  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.753898  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:04:18.753955  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.763929  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:04:18.773108  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:04:18.773153  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:04:18.782710  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:04:18.792750  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:18.902446  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.056638  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.154145942s)
	I1212 01:04:20.056677  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.275475  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.348697  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.483317  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:04:20.483487  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.983704  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.484485  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.526353  141411 api_server.go:72] duration metric: took 1.043031812s to wait for apiserver process to appear ...
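The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a poll: judging by the timestamps, the process check is retried roughly every 500ms until kube-apiserver shows up or a deadline passes. A minimal local Go sketch of that kind of wait (the real loop runs the command over SSH on the node):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls pgrep until the pattern matches or the timeout expires.
    func waitForProcess(pattern string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern.
            if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
        if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
            fmt.Println(err)
        }
    }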
	I1212 01:04:21.526389  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:04:21.526415  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:20.207458  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:22.212936  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:20.335276  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.835232  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.335776  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.835983  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.335369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.836160  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.335257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.835348  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.336170  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.835521  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.362548  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.362574  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.362586  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.380904  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.380939  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.527174  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.533112  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:24.533146  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.026678  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.031368  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.031409  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.526576  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.532260  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.532297  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:26.026741  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:26.031841  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:04:26.038198  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:04:26.038228  141411 api_server.go:131] duration metric: took 4.511829936s to wait for apiserver health ...
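The healthz probes above progress from 403 (the probe is treated as system:anonymous before RBAC bootstrap completes) to 500 (a couple of post-start hooks still failing) to 200 ok. A small Go sketch of polling an HTTPS /healthz endpoint until it answers 200; certificate verification is skipped here purely to keep the sketch self-contained, whereas a real client would trust the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping verification is only acceptable for this illustration.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        // Endpoint taken from the log above.
        if err := waitForHealthz("https://192.168.61.222:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }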
	I1212 01:04:26.038240  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:26.038249  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:26.040150  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:04:22.343994  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:24.344818  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.346428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.041669  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:04:26.055010  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:04:26.076860  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:04:26.092122  141411 system_pods.go:59] 8 kube-system pods found
	I1212 01:04:26.092154  141411 system_pods.go:61] "coredns-7c65d6cfc9-7w9dc" [878bfb78-fae5-4e05-b0ae-362841eace85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:04:26.092163  141411 system_pods.go:61] "etcd-no-preload-242725" [ed97c029-7933-4f4e-ab6c-f514b963ce21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:04:26.092170  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [df66d12b-b847-4ef3-b610-5679ff50e8c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:04:26.092175  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [eb5bc914-4267-41e8-9b37-26b7d3da9f68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:04:26.092180  141411 system_pods.go:61] "kube-proxy-rjwps" [fccefb3e-a282-4f0e-9070-11cc95bca868] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:04:26.092185  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [139de4ad-468c-4f1b-becf-3708bcaa7c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:04:26.092190  141411 system_pods.go:61] "metrics-server-6867b74b74-xzkbn" [16e0364c-18f9-43c2-9394-bc8548ce9caa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:04:26.092194  141411 system_pods.go:61] "storage-provisioner" [06c3232e-011a-4aff-b3ca-81858355bef4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:04:26.092200  141411 system_pods.go:74] duration metric: took 15.315757ms to wait for pod list to return data ...
	I1212 01:04:26.092208  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:04:26.095691  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:04:26.095715  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:04:26.095725  141411 node_conditions.go:105] duration metric: took 3.513466ms to run NodePressure ...
	I1212 01:04:26.095742  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:26.389652  141411 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398484  141411 kubeadm.go:739] kubelet initialised
	I1212 01:04:26.398513  141411 kubeadm.go:740] duration metric: took 8.824036ms waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398524  141411 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:04:26.406667  141411 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.416093  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416137  141411 pod_ready.go:82] duration metric: took 9.418311ms for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.416151  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416165  141411 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.422922  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422951  141411 pod_ready.go:82] duration metric: took 6.774244ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.422962  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422971  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.429822  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429854  141411 pod_ready.go:82] duration metric: took 6.874602ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.429866  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429875  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.483542  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483578  141411 pod_ready.go:82] duration metric: took 53.690915ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.483609  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483622  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
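The pod_ready lines above poll each system pod's Ready condition and, while the node itself still reports Ready:False, skip ahead with a warning. One way to make the same readiness check from outside the cluster is via kubectl's jsonpath output; this sketch assumes kubectl can reach the cluster and uses the profile name from this log as the context:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady asks kubectl for the pod's Ready condition status ("True"/"False").
    func podReady(namespace, name string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", "no-preload-242725",
            "-n", namespace, "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        // Poll one of the pods named in the log until it reports Ready.
        for i := 0; i < 10; i++ {
            ready, err := podReady("kube-system", "kube-proxy-rjwps")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("pod not Ready yet")
    }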
	I1212 01:04:24.707572  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:27.207073  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:25.335742  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:25.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.335824  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.836097  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.335807  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.835612  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.335615  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.835140  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.335695  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.843868  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.844684  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:28.081872  141411 pod_ready.go:93] pod "kube-proxy-rjwps" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:28.081901  141411 pod_ready.go:82] duration metric: took 1.598267411s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:28.081921  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:30.088965  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:32.099574  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:29.706557  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:31.706767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:33.706983  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.335304  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:30.835767  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.335536  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.836051  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.336149  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.835257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.335529  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.835959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.336054  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.835955  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.344074  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.345401  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:34.588690  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:34.588715  141411 pod_ready.go:82] duration metric: took 6.50678624s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:34.588727  141411 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:36.596475  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:36.207357  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:38.207516  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.335472  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:35.835166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.335337  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.336098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.835686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.335195  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.835464  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.336101  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.836164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.844602  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.845115  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.095215  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:41.594487  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.708001  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:42.708477  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.336111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:40.835714  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.335249  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.836111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.335205  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.836175  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.335577  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.835336  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.335947  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.835740  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.344150  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.844336  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:43.595231  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:46.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.708857  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:47.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.207408  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:45.335845  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:45.835169  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.335842  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.835872  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.335682  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.835761  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.336087  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.836134  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.844848  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.344941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:48.595492  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.095830  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.706544  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:50.335959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:50.835873  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:50.835996  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:50.878308  142150 cri.go:89] found id: ""
	I1212 01:04:50.878347  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.878360  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:50.878377  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:50.878444  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:50.914645  142150 cri.go:89] found id: ""
	I1212 01:04:50.914673  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.914681  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:50.914687  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:50.914736  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:50.954258  142150 cri.go:89] found id: ""
	I1212 01:04:50.954286  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.954307  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:50.954314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:50.954376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:50.993317  142150 cri.go:89] found id: ""
	I1212 01:04:50.993353  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.993361  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:50.993367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:50.993430  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:51.028521  142150 cri.go:89] found id: ""
	I1212 01:04:51.028551  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.028565  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:51.028572  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:51.028653  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:51.064752  142150 cri.go:89] found id: ""
	I1212 01:04:51.064779  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.064791  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:51.064799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:51.064861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:51.099780  142150 cri.go:89] found id: ""
	I1212 01:04:51.099809  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.099820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:51.099828  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:51.099910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:51.140668  142150 cri.go:89] found id: ""
	I1212 01:04:51.140696  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.140704  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:51.140713  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:51.140747  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.181092  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:51.181123  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:51.239873  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:51.239914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:51.256356  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:51.256383  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:51.391545  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:51.391573  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:51.391602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
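A minimal sketch of the probe being retried above: first look for a running kube-apiserver process with pgrep, then fall back to asking the CRI runtime with crictl. The two commands are taken verbatim from the log; the surrounding Go loop is illustrative only and assumes sudo and crictl are available on the host.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // probeAPIServer mirrors the probe in the log: check for the kube-apiserver
    // process, then ask the CRI runtime whether an apiserver container exists.
    func probeAPIServer() (bool, error) {
    	// Same pgrep pattern the log shows being retried roughly every 500ms.
    	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    		return true, nil
    	}
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    	if err != nil {
    		return false, err
    	}
    	// An empty ID list corresponds to the log's `found id: ""` result.
    	return strings.TrimSpace(string(out)) != "", nil
    }

    func main() {
    	for i := 0; i < 10; i++ {
    		up, err := probeAPIServer()
    		fmt.Printf("attempt %d: apiserver present=%v err=%v\n", i+1, up, err)
    		if up {
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }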
	I1212 01:04:53.965098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:53.981900  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:53.981994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:54.033922  142150 cri.go:89] found id: ""
	I1212 01:04:54.033955  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.033967  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:54.033975  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:54.034038  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:54.084594  142150 cri.go:89] found id: ""
	I1212 01:04:54.084623  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.084634  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:54.084641  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:54.084704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:54.132671  142150 cri.go:89] found id: ""
	I1212 01:04:54.132700  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.132708  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:54.132714  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:54.132768  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:54.169981  142150 cri.go:89] found id: ""
	I1212 01:04:54.170011  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.170019  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:54.170025  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:54.170078  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:54.207708  142150 cri.go:89] found id: ""
	I1212 01:04:54.207737  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.207747  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:54.207753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:54.207812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:54.248150  142150 cri.go:89] found id: ""
	I1212 01:04:54.248176  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.248184  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:54.248191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:54.248240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:54.287792  142150 cri.go:89] found id: ""
	I1212 01:04:54.287820  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.287829  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:54.287835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:54.287892  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:54.322288  142150 cri.go:89] found id: ""
	I1212 01:04:54.322319  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.322330  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:54.322347  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:54.322364  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:54.378947  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:54.378989  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:54.394801  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:54.394845  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:54.473896  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:54.473916  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:54.473929  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:54.558076  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:54.558135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.843857  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:54.345207  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.095934  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.598377  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.706720  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.707883  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.102923  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:57.117418  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:57.117478  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:57.157977  142150 cri.go:89] found id: ""
	I1212 01:04:57.158003  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.158012  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:57.158017  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:57.158074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:57.196388  142150 cri.go:89] found id: ""
	I1212 01:04:57.196417  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.196427  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:57.196432  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:57.196484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:57.238004  142150 cri.go:89] found id: ""
	I1212 01:04:57.238040  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.238048  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:57.238055  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:57.238124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:57.276619  142150 cri.go:89] found id: ""
	I1212 01:04:57.276665  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.276676  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:57.276684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:57.276750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:57.313697  142150 cri.go:89] found id: ""
	I1212 01:04:57.313733  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.313745  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:57.313753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:57.313823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:57.351569  142150 cri.go:89] found id: ""
	I1212 01:04:57.351616  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.351629  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:57.351637  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:57.351705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:57.386726  142150 cri.go:89] found id: ""
	I1212 01:04:57.386758  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.386766  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:57.386772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:57.386821  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:57.421496  142150 cri.go:89] found id: ""
	I1212 01:04:57.421524  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.421533  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:57.421543  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:57.421555  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:57.475374  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:57.475425  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:57.490771  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:57.490813  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:57.562485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:57.562513  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:57.562530  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:57.645022  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:57.645070  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.193526  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:00.209464  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:00.209539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:56.843562  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.843654  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:01.343428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.095640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.596162  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.207281  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:02.706000  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.248388  142150 cri.go:89] found id: ""
	I1212 01:05:00.248417  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.248426  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:00.248431  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:00.248480  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:00.284598  142150 cri.go:89] found id: ""
	I1212 01:05:00.284632  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.284642  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:00.284648  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:00.284710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:00.321068  142150 cri.go:89] found id: ""
	I1212 01:05:00.321107  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.321119  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:00.321127  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:00.321189  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:00.358622  142150 cri.go:89] found id: ""
	I1212 01:05:00.358651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.358660  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:00.358666  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:00.358720  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:00.398345  142150 cri.go:89] found id: ""
	I1212 01:05:00.398373  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.398383  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:00.398390  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:00.398442  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:00.437178  142150 cri.go:89] found id: ""
	I1212 01:05:00.437215  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.437227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:00.437235  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:00.437307  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:00.472621  142150 cri.go:89] found id: ""
	I1212 01:05:00.472651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.472662  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:00.472668  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:00.472735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:00.510240  142150 cri.go:89] found id: ""
	I1212 01:05:00.510268  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.510278  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:00.510288  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:00.510301  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:00.596798  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:00.596819  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:00.596830  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:00.673465  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:00.673506  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.716448  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:00.716485  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:00.770265  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:00.770303  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.285159  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:03.299981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:03.300043  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:03.335198  142150 cri.go:89] found id: ""
	I1212 01:05:03.335227  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.335239  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:03.335248  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:03.335319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:03.372624  142150 cri.go:89] found id: ""
	I1212 01:05:03.372651  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.372659  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:03.372665  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:03.372712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:03.408235  142150 cri.go:89] found id: ""
	I1212 01:05:03.408267  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.408279  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:03.408286  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:03.408350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:03.448035  142150 cri.go:89] found id: ""
	I1212 01:05:03.448068  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.448083  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:03.448091  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:03.448144  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:03.488563  142150 cri.go:89] found id: ""
	I1212 01:05:03.488593  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.488602  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:03.488607  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:03.488658  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:03.527858  142150 cri.go:89] found id: ""
	I1212 01:05:03.527886  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.527905  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:03.527913  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:03.527969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:03.564004  142150 cri.go:89] found id: ""
	I1212 01:05:03.564034  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.564044  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:03.564052  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:03.564113  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:03.610648  142150 cri.go:89] found id: ""
	I1212 01:05:03.610679  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.610691  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:03.610702  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:03.610716  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:03.666958  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:03.666996  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.680927  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:03.680961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:03.762843  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:03.762876  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:03.762894  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:03.838434  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:03.838472  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:03.344025  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.844236  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:03.095197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.096865  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:04.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.208202  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:06.377590  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:06.391770  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:06.391861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:06.430050  142150 cri.go:89] found id: ""
	I1212 01:05:06.430083  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.430096  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:06.430103  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:06.430168  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:06.467980  142150 cri.go:89] found id: ""
	I1212 01:05:06.468014  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.468026  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:06.468033  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:06.468090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:06.505111  142150 cri.go:89] found id: ""
	I1212 01:05:06.505144  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.505156  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:06.505165  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:06.505235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:06.542049  142150 cri.go:89] found id: ""
	I1212 01:05:06.542091  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.542104  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:06.542112  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:06.542175  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:06.576957  142150 cri.go:89] found id: ""
	I1212 01:05:06.576982  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.576991  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:06.576997  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:06.577050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:06.613930  142150 cri.go:89] found id: ""
	I1212 01:05:06.613963  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.613974  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:06.613980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:06.614045  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:06.654407  142150 cri.go:89] found id: ""
	I1212 01:05:06.654441  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.654450  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:06.654455  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:06.654503  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:06.691074  142150 cri.go:89] found id: ""
	I1212 01:05:06.691103  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.691112  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:06.691122  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:06.691133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:06.748638  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:06.748674  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:06.762741  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:06.762772  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:06.833840  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:06.833867  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:06.833885  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:06.914595  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:06.914649  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.461666  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:09.478815  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:09.478889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:09.515975  142150 cri.go:89] found id: ""
	I1212 01:05:09.516007  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.516019  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:09.516042  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:09.516120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:09.556933  142150 cri.go:89] found id: ""
	I1212 01:05:09.556965  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.556977  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:09.556985  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:09.557050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:09.593479  142150 cri.go:89] found id: ""
	I1212 01:05:09.593509  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.593520  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:09.593528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:09.593595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:09.633463  142150 cri.go:89] found id: ""
	I1212 01:05:09.633501  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.633513  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:09.633522  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:09.633583  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:09.666762  142150 cri.go:89] found id: ""
	I1212 01:05:09.666789  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.666798  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:09.666804  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:09.666871  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:09.704172  142150 cri.go:89] found id: ""
	I1212 01:05:09.704206  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.704217  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:09.704228  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:09.704288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:09.749679  142150 cri.go:89] found id: ""
	I1212 01:05:09.749708  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.749717  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:09.749724  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:09.749791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:09.789339  142150 cri.go:89] found id: ""
	I1212 01:05:09.789370  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.789379  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:09.789388  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:09.789399  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:09.875218  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:09.875259  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.918042  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:09.918074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:09.971010  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:09.971052  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:09.985524  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:09.985553  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:10.059280  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
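The repeated "connection to the server localhost:8443 was refused" errors can be confirmed by probing the apiserver's health endpoint directly. A small diagnostic sketch, assuming the localhost:8443 address from the log and skipping TLS verification purely for this local check:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // Hit the apiserver's /healthz endpoint; a "connection refused" error here
    // matches what the kubectl calls in the log are reporting.
    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://localhost:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status, string(body))
    }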
	I1212 01:05:08.343968  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:10.844912  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.595940  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.596206  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.094527  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.707469  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.206124  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.206285  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.560353  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:12.573641  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:12.573719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:12.611903  142150 cri.go:89] found id: ""
	I1212 01:05:12.611931  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.611940  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:12.611947  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:12.612019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:12.647038  142150 cri.go:89] found id: ""
	I1212 01:05:12.647078  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.647090  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:12.647099  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:12.647188  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:12.684078  142150 cri.go:89] found id: ""
	I1212 01:05:12.684111  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.684123  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:12.684132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:12.684194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:12.720094  142150 cri.go:89] found id: ""
	I1212 01:05:12.720125  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.720137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:12.720145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:12.720208  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:12.762457  142150 cri.go:89] found id: ""
	I1212 01:05:12.762492  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.762504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:12.762512  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:12.762564  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:12.798100  142150 cri.go:89] found id: ""
	I1212 01:05:12.798131  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.798139  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:12.798145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:12.798195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:12.832455  142150 cri.go:89] found id: ""
	I1212 01:05:12.832486  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.832494  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:12.832501  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:12.832558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:12.866206  142150 cri.go:89] found id: ""
	I1212 01:05:12.866239  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.866249  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:12.866258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:12.866273  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:12.918512  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:12.918550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:12.932506  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:12.932535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:13.011647  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:13.011670  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:13.011689  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:13.090522  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:13.090565  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:13.343045  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.343706  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.096430  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.097196  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.207697  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.634171  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:15.648003  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:15.648067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:15.684747  142150 cri.go:89] found id: ""
	I1212 01:05:15.684780  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.684788  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:15.684795  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:15.684856  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:15.723209  142150 cri.go:89] found id: ""
	I1212 01:05:15.723236  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.723245  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:15.723252  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:15.723299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:15.761473  142150 cri.go:89] found id: ""
	I1212 01:05:15.761504  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.761513  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:15.761519  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:15.761588  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:15.795637  142150 cri.go:89] found id: ""
	I1212 01:05:15.795668  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.795677  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:15.795685  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:15.795735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:15.835576  142150 cri.go:89] found id: ""
	I1212 01:05:15.835616  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.835628  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:15.835636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:15.835690  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:15.877331  142150 cri.go:89] found id: ""
	I1212 01:05:15.877359  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.877370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:15.877379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:15.877440  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:15.914225  142150 cri.go:89] found id: ""
	I1212 01:05:15.914255  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.914265  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:15.914271  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:15.914323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:15.949819  142150 cri.go:89] found id: ""
	I1212 01:05:15.949845  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.949853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:15.949862  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:15.949877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:16.029950  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:16.029991  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:16.071065  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:16.071094  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:16.126731  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:16.126786  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:16.140774  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:16.140807  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:16.210269  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:18.710498  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:18.725380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:18.725462  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:18.762409  142150 cri.go:89] found id: ""
	I1212 01:05:18.762438  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.762446  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:18.762453  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:18.762501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:18.800308  142150 cri.go:89] found id: ""
	I1212 01:05:18.800336  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.800344  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:18.800351  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:18.800419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:18.834918  142150 cri.go:89] found id: ""
	I1212 01:05:18.834947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.834955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:18.834962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:18.835012  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:18.872434  142150 cri.go:89] found id: ""
	I1212 01:05:18.872470  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.872481  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:18.872490  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:18.872551  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:18.906919  142150 cri.go:89] found id: ""
	I1212 01:05:18.906947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.906955  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:18.906962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:18.907011  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:18.944626  142150 cri.go:89] found id: ""
	I1212 01:05:18.944661  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.944671  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:18.944677  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:18.944728  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:18.981196  142150 cri.go:89] found id: ""
	I1212 01:05:18.981224  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.981233  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:18.981239  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:18.981290  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:19.017640  142150 cri.go:89] found id: ""
	I1212 01:05:19.017669  142150 logs.go:282] 0 containers: []
	W1212 01:05:19.017679  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:19.017691  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:19.017728  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:19.089551  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:19.089582  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:19.089602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:19.176914  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:19.176958  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:19.223652  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:19.223694  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:19.281292  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:19.281353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:17.344863  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:19.348835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.595465  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:20.708087  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:22.708298  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.797351  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:21.811040  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:21.811120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:21.847213  142150 cri.go:89] found id: ""
	I1212 01:05:21.847242  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.847253  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:21.847261  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:21.847323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:21.883925  142150 cri.go:89] found id: ""
	I1212 01:05:21.883952  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.883961  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:21.883967  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:21.884029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:21.925919  142150 cri.go:89] found id: ""
	I1212 01:05:21.925946  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.925955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:21.925961  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:21.926025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:21.963672  142150 cri.go:89] found id: ""
	I1212 01:05:21.963708  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.963719  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:21.963728  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:21.963794  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:22.000058  142150 cri.go:89] found id: ""
	I1212 01:05:22.000086  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.000094  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:22.000100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:22.000153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:22.036262  142150 cri.go:89] found id: ""
	I1212 01:05:22.036294  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.036305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:22.036314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:22.036381  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:22.072312  142150 cri.go:89] found id: ""
	I1212 01:05:22.072348  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.072361  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:22.072369  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:22.072428  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:22.109376  142150 cri.go:89] found id: ""
	I1212 01:05:22.109406  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.109413  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:22.109422  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:22.109436  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:22.183975  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:22.184006  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:22.184024  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:22.262037  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:22.262076  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:22.306902  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:22.306934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:22.361922  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:22.361964  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:24.877203  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:24.891749  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:24.891822  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:24.926934  142150 cri.go:89] found id: ""
	I1212 01:05:24.926974  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.926987  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:24.926997  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:24.927061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:24.961756  142150 cri.go:89] found id: ""
	I1212 01:05:24.961791  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.961803  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:24.961812  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:24.961872  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:25.001414  142150 cri.go:89] found id: ""
	I1212 01:05:25.001449  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.001462  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:25.001470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:25.001536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:25.038398  142150 cri.go:89] found id: ""
	I1212 01:05:25.038429  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.038438  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:25.038443  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:25.038499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:25.074146  142150 cri.go:89] found id: ""
	I1212 01:05:25.074175  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.074184  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:25.074191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:25.074266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:25.112259  142150 cri.go:89] found id: ""
	I1212 01:05:25.112287  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.112295  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:25.112303  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:25.112366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:25.148819  142150 cri.go:89] found id: ""
	I1212 01:05:25.148846  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.148853  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:25.148859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:25.148916  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:25.191229  142150 cri.go:89] found id: ""
	I1212 01:05:25.191262  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.191274  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:25.191286  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:25.191298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:21.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:24.344442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:26.344638  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:23.095266  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.096246  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.097041  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.208225  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.706184  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.280584  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:25.280641  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:25.325436  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:25.325473  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:25.380358  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:25.380406  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:25.394854  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:25.394889  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:25.474359  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:27.975286  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:27.989833  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:27.989893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:28.027211  142150 cri.go:89] found id: ""
	I1212 01:05:28.027242  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.027254  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:28.027262  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:28.027319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:28.063115  142150 cri.go:89] found id: ""
	I1212 01:05:28.063147  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.063158  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:28.063165  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:28.063226  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:28.121959  142150 cri.go:89] found id: ""
	I1212 01:05:28.121993  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.122006  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:28.122014  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:28.122074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:28.161636  142150 cri.go:89] found id: ""
	I1212 01:05:28.161666  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.161674  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:28.161680  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:28.161745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:28.197581  142150 cri.go:89] found id: ""
	I1212 01:05:28.197615  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.197627  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:28.197636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:28.197704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:28.234811  142150 cri.go:89] found id: ""
	I1212 01:05:28.234839  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.234849  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:28.234857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:28.234914  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:28.275485  142150 cri.go:89] found id: ""
	I1212 01:05:28.275510  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.275518  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:28.275524  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:28.275570  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:28.311514  142150 cri.go:89] found id: ""
	I1212 01:05:28.311551  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.311562  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:28.311574  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:28.311608  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:28.362113  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:28.362153  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:28.376321  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:28.376353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:28.460365  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:28.460394  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:28.460412  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:28.545655  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:28.545697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:28.850925  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.344959  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.595032  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.595989  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.706696  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:32.206728  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.206974  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.088684  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:31.103954  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:31.104033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:31.143436  142150 cri.go:89] found id: ""
	I1212 01:05:31.143468  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.143478  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:31.143488  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:31.143541  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:31.181127  142150 cri.go:89] found id: ""
	I1212 01:05:31.181162  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.181173  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:31.181181  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:31.181246  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:31.217764  142150 cri.go:89] found id: ""
	I1212 01:05:31.217794  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.217805  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:31.217812  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:31.217882  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:31.253648  142150 cri.go:89] found id: ""
	I1212 01:05:31.253674  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.253683  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:31.253690  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:31.253745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:31.292365  142150 cri.go:89] found id: ""
	I1212 01:05:31.292393  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.292401  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:31.292407  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:31.292455  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:31.329834  142150 cri.go:89] found id: ""
	I1212 01:05:31.329866  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.329876  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:31.329883  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:31.329934  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:31.368679  142150 cri.go:89] found id: ""
	I1212 01:05:31.368712  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.368720  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:31.368726  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:31.368784  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:31.409003  142150 cri.go:89] found id: ""
	I1212 01:05:31.409028  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.409036  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:31.409053  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:31.409068  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:31.462888  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:31.462927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:31.477975  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:31.478011  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:31.545620  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:31.545648  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:31.545665  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:31.626530  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:31.626570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.167917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:34.183293  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:34.183372  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:34.219167  142150 cri.go:89] found id: ""
	I1212 01:05:34.219191  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.219200  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:34.219206  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:34.219265  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:34.254552  142150 cri.go:89] found id: ""
	I1212 01:05:34.254580  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.254588  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:34.254594  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:34.254645  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:34.289933  142150 cri.go:89] found id: ""
	I1212 01:05:34.289960  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.289969  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:34.289975  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:34.290027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:34.325468  142150 cri.go:89] found id: ""
	I1212 01:05:34.325497  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.325505  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:34.325510  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:34.325558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:34.364154  142150 cri.go:89] found id: ""
	I1212 01:05:34.364185  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.364197  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:34.364205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:34.364256  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:34.400516  142150 cri.go:89] found id: ""
	I1212 01:05:34.400546  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.400554  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:34.400559  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:34.400621  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:34.437578  142150 cri.go:89] found id: ""
	I1212 01:05:34.437608  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.437616  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:34.437622  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:34.437687  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:34.472061  142150 cri.go:89] found id: ""
	I1212 01:05:34.472094  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.472105  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:34.472117  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:34.472135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.526286  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:34.526340  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:34.610616  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:34.610664  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:34.625098  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:34.625130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:34.699706  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:34.699736  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:34.699759  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:33.844343  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.343847  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.096631  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.594963  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.707213  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:39.207473  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:37.282716  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:37.299415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:37.299486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:37.337783  142150 cri.go:89] found id: ""
	I1212 01:05:37.337820  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.337833  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:37.337842  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:37.337910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:37.375491  142150 cri.go:89] found id: ""
	I1212 01:05:37.375526  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.375539  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:37.375547  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:37.375637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:37.417980  142150 cri.go:89] found id: ""
	I1212 01:05:37.418016  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.418028  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:37.418037  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:37.418115  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:37.454902  142150 cri.go:89] found id: ""
	I1212 01:05:37.454936  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.454947  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:37.454956  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:37.455029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:37.492144  142150 cri.go:89] found id: ""
	I1212 01:05:37.492175  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.492188  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:37.492196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:37.492266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:37.531054  142150 cri.go:89] found id: ""
	I1212 01:05:37.531085  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.531094  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:37.531100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:37.531161  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:37.565127  142150 cri.go:89] found id: ""
	I1212 01:05:37.565169  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.565191  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:37.565209  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:37.565269  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:37.601233  142150 cri.go:89] found id: ""
	I1212 01:05:37.601273  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.601286  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:37.601300  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:37.601315  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:37.652133  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:37.652172  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:37.666974  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:37.667007  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:37.744500  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:37.744527  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:37.744544  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:37.825572  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:37.825611  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:38.842756  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.845163  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:38.595482  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.595779  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:41.707367  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:44.206693  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.366883  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:40.380597  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:40.380662  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:40.417588  142150 cri.go:89] found id: ""
	I1212 01:05:40.417614  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.417623  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:40.417629  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:40.417681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:40.452506  142150 cri.go:89] found id: ""
	I1212 01:05:40.452535  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.452547  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:40.452555  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:40.452620  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:40.496623  142150 cri.go:89] found id: ""
	I1212 01:05:40.496657  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.496669  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:40.496681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:40.496755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:40.534202  142150 cri.go:89] found id: ""
	I1212 01:05:40.534241  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.534266  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:40.534277  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:40.534337  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:40.580317  142150 cri.go:89] found id: ""
	I1212 01:05:40.580346  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.580359  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:40.580367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:40.580437  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:40.616814  142150 cri.go:89] found id: ""
	I1212 01:05:40.616842  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.616850  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:40.616857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:40.616909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:40.653553  142150 cri.go:89] found id: ""
	I1212 01:05:40.653584  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.653593  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:40.653603  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:40.653667  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:40.687817  142150 cri.go:89] found id: ""
	I1212 01:05:40.687843  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.687852  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:40.687862  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:40.687872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:40.739304  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:40.739343  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:40.753042  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:40.753074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:40.820091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:40.820112  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:40.820126  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:40.903503  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:40.903561  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.446157  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:43.461289  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:43.461365  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:43.503352  142150 cri.go:89] found id: ""
	I1212 01:05:43.503385  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.503394  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:43.503402  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:43.503466  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:43.541576  142150 cri.go:89] found id: ""
	I1212 01:05:43.541610  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.541619  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:43.541626  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:43.541683  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:43.581255  142150 cri.go:89] found id: ""
	I1212 01:05:43.581285  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.581298  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:43.581305  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:43.581384  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:43.622081  142150 cri.go:89] found id: ""
	I1212 01:05:43.622114  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.622126  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:43.622135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:43.622201  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:43.657001  142150 cri.go:89] found id: ""
	I1212 01:05:43.657032  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.657041  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:43.657048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:43.657114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:43.691333  142150 cri.go:89] found id: ""
	I1212 01:05:43.691362  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.691370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:43.691376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:43.691425  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:43.728745  142150 cri.go:89] found id: ""
	I1212 01:05:43.728779  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.728791  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:43.728799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:43.728864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:43.764196  142150 cri.go:89] found id: ""
	I1212 01:05:43.764229  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.764241  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:43.764253  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:43.764268  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.804433  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:43.804469  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:43.858783  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:43.858822  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:43.873582  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:43.873610  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:43.949922  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:43.949945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:43.949962  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:43.343827  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.346793  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:43.095993  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.096437  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.206828  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:48.708067  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.531390  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:46.546806  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:46.546881  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:46.583062  142150 cri.go:89] found id: ""
	I1212 01:05:46.583103  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.583116  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:46.583124  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:46.583187  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:46.621483  142150 cri.go:89] found id: ""
	I1212 01:05:46.621513  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.621524  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:46.621532  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:46.621595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:46.658400  142150 cri.go:89] found id: ""
	I1212 01:05:46.658431  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.658440  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:46.658450  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:46.658520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:46.694368  142150 cri.go:89] found id: ""
	I1212 01:05:46.694393  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.694407  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:46.694413  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:46.694469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:46.733456  142150 cri.go:89] found id: ""
	I1212 01:05:46.733492  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.733504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:46.733513  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:46.733574  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:46.767206  142150 cri.go:89] found id: ""
	I1212 01:05:46.767236  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.767248  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:46.767255  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:46.767317  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:46.803520  142150 cri.go:89] found id: ""
	I1212 01:05:46.803554  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.803564  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:46.803575  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:46.803657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:46.849563  142150 cri.go:89] found id: ""
	I1212 01:05:46.849590  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.849597  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:46.849606  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:46.849618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:46.862800  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:46.862831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:46.931858  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:46.931883  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:46.931896  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:47.009125  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:47.009167  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.050830  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:47.050858  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.604639  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:49.618087  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:49.618153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:49.653674  142150 cri.go:89] found id: ""
	I1212 01:05:49.653703  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.653712  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:49.653718  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:49.653772  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:49.688391  142150 cri.go:89] found id: ""
	I1212 01:05:49.688428  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.688439  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:49.688446  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:49.688516  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:49.729378  142150 cri.go:89] found id: ""
	I1212 01:05:49.729412  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.729423  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:49.729432  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:49.729492  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:49.765171  142150 cri.go:89] found id: ""
	I1212 01:05:49.765198  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.765206  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:49.765213  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:49.765260  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:49.800980  142150 cri.go:89] found id: ""
	I1212 01:05:49.801018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.801027  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:49.801034  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:49.801086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:49.836122  142150 cri.go:89] found id: ""
	I1212 01:05:49.836149  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.836161  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:49.836169  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:49.836235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:49.873978  142150 cri.go:89] found id: ""
	I1212 01:05:49.874018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.874027  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:49.874032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:49.874086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:49.909709  142150 cri.go:89] found id: ""
	I1212 01:05:49.909741  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.909754  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:49.909766  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:49.909783  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.963352  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:49.963394  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:49.977813  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:49.977841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:50.054423  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:50.054452  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:50.054470  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:50.133375  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:50.133416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.843200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:49.844564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:47.595931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:50.095312  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.096092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:51.206349  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:53.206853  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.673427  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:52.687196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:52.687259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:52.725001  142150 cri.go:89] found id: ""
	I1212 01:05:52.725031  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.725039  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:52.725045  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:52.725110  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:52.760885  142150 cri.go:89] found id: ""
	I1212 01:05:52.760923  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.760934  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:52.760941  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:52.761025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:52.798583  142150 cri.go:89] found id: ""
	I1212 01:05:52.798615  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.798627  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:52.798635  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:52.798700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:52.835957  142150 cri.go:89] found id: ""
	I1212 01:05:52.835983  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.835991  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:52.835998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:52.836065  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:52.876249  142150 cri.go:89] found id: ""
	I1212 01:05:52.876281  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.876292  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:52.876299  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:52.876397  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:52.911667  142150 cri.go:89] found id: ""
	I1212 01:05:52.911700  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.911712  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:52.911720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:52.911796  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:52.946781  142150 cri.go:89] found id: ""
	I1212 01:05:52.946808  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.946820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:52.946827  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:52.946889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:52.985712  142150 cri.go:89] found id: ""
	I1212 01:05:52.985740  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.985752  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:52.985762  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:52.985778  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:53.038522  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:53.038563  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:53.052336  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:53.052382  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:53.132247  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:53.132280  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:53.132297  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:53.208823  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:53.208851  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:52.344518  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.344667  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.594738  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:56.595036  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:57.207827  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.747479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:55.760703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:55.760765  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:55.797684  142150 cri.go:89] found id: ""
	I1212 01:05:55.797720  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.797732  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:55.797740  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:55.797807  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:55.840900  142150 cri.go:89] found id: ""
	I1212 01:05:55.840933  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.840944  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:55.840953  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:55.841033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:55.879098  142150 cri.go:89] found id: ""
	I1212 01:05:55.879131  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.879144  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:55.879152  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:55.879217  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:55.914137  142150 cri.go:89] found id: ""
	I1212 01:05:55.914166  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.914174  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:55.914181  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:55.914238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:55.950608  142150 cri.go:89] found id: ""
	I1212 01:05:55.950635  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.950644  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:55.950654  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:55.950705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:55.992162  142150 cri.go:89] found id: ""
	I1212 01:05:55.992187  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.992196  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:55.992202  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:55.992254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:56.028071  142150 cri.go:89] found id: ""
	I1212 01:05:56.028097  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.028105  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:56.028111  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:56.028164  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:56.063789  142150 cri.go:89] found id: ""
	I1212 01:05:56.063814  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.063822  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:56.063832  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:56.063844  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:56.118057  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:56.118096  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.132908  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:56.132939  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:56.200923  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:56.200951  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:56.200971  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:56.283272  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:56.283321  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:58.825548  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:58.839298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:58.839368  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:58.874249  142150 cri.go:89] found id: ""
	I1212 01:05:58.874289  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.874301  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:58.874313  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:58.874391  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:58.909238  142150 cri.go:89] found id: ""
	I1212 01:05:58.909273  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.909286  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:58.909294  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:58.909359  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:58.945112  142150 cri.go:89] found id: ""
	I1212 01:05:58.945139  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.945146  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:58.945154  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:58.945203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:58.981101  142150 cri.go:89] found id: ""
	I1212 01:05:58.981153  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.981168  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:58.981176  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:58.981241  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:59.015095  142150 cri.go:89] found id: ""
	I1212 01:05:59.015135  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.015147  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:59.015158  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:59.015224  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:59.051606  142150 cri.go:89] found id: ""
	I1212 01:05:59.051640  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.051650  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:59.051659  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:59.051719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:59.088125  142150 cri.go:89] found id: ""
	I1212 01:05:59.088153  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.088161  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:59.088166  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:59.088223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:59.127803  142150 cri.go:89] found id: ""
	I1212 01:05:59.127829  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.127841  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:59.127853  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:59.127871  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:59.204831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:59.204857  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:59.204872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:59.285346  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:59.285387  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:59.324194  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:59.324233  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:59.378970  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:59.379022  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.845550  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.344473  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:58.595556  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:00.595723  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.706748  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.709131  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.893635  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:01.907481  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:01.907606  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:01.949985  142150 cri.go:89] found id: ""
	I1212 01:06:01.950022  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.950035  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:01.950043  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:01.950112  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:01.986884  142150 cri.go:89] found id: ""
	I1212 01:06:01.986914  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.986923  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:01.986928  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:01.986994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:02.025010  142150 cri.go:89] found id: ""
	I1212 01:06:02.025044  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.025056  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:02.025063  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:02.025137  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:02.061300  142150 cri.go:89] found id: ""
	I1212 01:06:02.061340  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.061352  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:02.061361  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:02.061427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:02.098627  142150 cri.go:89] found id: ""
	I1212 01:06:02.098667  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.098677  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:02.098684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:02.098744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:02.137005  142150 cri.go:89] found id: ""
	I1212 01:06:02.137030  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.137038  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:02.137044  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:02.137104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:02.172052  142150 cri.go:89] found id: ""
	I1212 01:06:02.172086  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.172096  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:02.172102  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:02.172154  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:02.207721  142150 cri.go:89] found id: ""
	I1212 01:06:02.207750  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.207761  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:02.207771  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:02.207787  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:02.221576  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:02.221605  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:02.291780  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:02.291812  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:02.291826  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:02.376553  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:02.376595  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:02.418407  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:02.418446  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:04.973347  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:04.988470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:04.988545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:05.024045  142150 cri.go:89] found id: ""
	I1212 01:06:05.024076  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.024085  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:05.024092  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:05.024149  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:05.060055  142150 cri.go:89] found id: ""
	I1212 01:06:05.060079  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.060089  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:05.060095  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:05.060145  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:05.097115  142150 cri.go:89] found id: ""
	I1212 01:06:05.097142  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.097152  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:05.097160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:05.097220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:05.133941  142150 cri.go:89] found id: ""
	I1212 01:06:05.133976  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.133990  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:05.133998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:05.134063  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:05.169157  142150 cri.go:89] found id: ""
	I1212 01:06:05.169185  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.169193  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:05.169200  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:05.169253  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:05.206434  142150 cri.go:89] found id: ""
	I1212 01:06:05.206464  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.206475  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:05.206484  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:05.206546  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:01.842981  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.843341  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.843811  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:02.597066  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:04.597793  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:07.095874  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:06.206955  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:08.208809  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.248363  142150 cri.go:89] found id: ""
	I1212 01:06:05.248397  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.248409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:05.248417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:05.248485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:05.284898  142150 cri.go:89] found id: ""
	I1212 01:06:05.284932  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.284945  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:05.284958  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:05.284974  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:05.362418  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:05.362445  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:05.362464  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:05.446289  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:05.446349  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:05.487075  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:05.487107  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:05.542538  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:05.542582  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.057586  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:08.070959  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:08.071019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:08.109906  142150 cri.go:89] found id: ""
	I1212 01:06:08.109936  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.109945  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:08.109951  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:08.110005  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:08.145130  142150 cri.go:89] found id: ""
	I1212 01:06:08.145159  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.145168  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:08.145175  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:08.145223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:08.183454  142150 cri.go:89] found id: ""
	I1212 01:06:08.183485  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.183496  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:08.183504  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:08.183573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:08.218728  142150 cri.go:89] found id: ""
	I1212 01:06:08.218752  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.218763  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:08.218772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:08.218835  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:08.256230  142150 cri.go:89] found id: ""
	I1212 01:06:08.256263  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.256274  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:08.256283  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:08.256345  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:08.294179  142150 cri.go:89] found id: ""
	I1212 01:06:08.294209  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.294221  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:08.294229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:08.294293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:08.335793  142150 cri.go:89] found id: ""
	I1212 01:06:08.335822  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.335835  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:08.335843  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:08.335905  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:08.387704  142150 cri.go:89] found id: ""
	I1212 01:06:08.387734  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.387746  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:08.387757  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:08.387773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:08.465260  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:08.465307  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:08.508088  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:08.508129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:08.558617  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:08.558655  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.573461  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:08.573489  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:08.649664  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:07.844408  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.343200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:09.595982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:12.094513  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.708379  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:13.207302  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:11.150614  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:11.164991  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:11.165062  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:11.201977  142150 cri.go:89] found id: ""
	I1212 01:06:11.202011  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.202045  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:11.202055  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:11.202124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:11.243638  142150 cri.go:89] found id: ""
	I1212 01:06:11.243667  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.243676  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:11.243682  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:11.243742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:11.279577  142150 cri.go:89] found id: ""
	I1212 01:06:11.279621  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.279634  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:11.279642  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:11.279709  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:11.317344  142150 cri.go:89] found id: ""
	I1212 01:06:11.317378  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.317386  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:11.317392  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:11.317457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:11.358331  142150 cri.go:89] found id: ""
	I1212 01:06:11.358361  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.358373  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:11.358381  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:11.358439  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:11.393884  142150 cri.go:89] found id: ""
	I1212 01:06:11.393911  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.393919  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:11.393926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:11.393974  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:11.433243  142150 cri.go:89] found id: ""
	I1212 01:06:11.433290  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.433302  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:11.433310  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:11.433374  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:11.478597  142150 cri.go:89] found id: ""
	I1212 01:06:11.478625  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.478637  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:11.478650  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:11.478667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:11.528096  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:11.528133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:11.542118  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:11.542149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:11.612414  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:11.612435  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:11.612451  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:11.689350  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:11.689389  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.230677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:14.245866  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:14.245970  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:14.283451  142150 cri.go:89] found id: ""
	I1212 01:06:14.283487  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.283495  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:14.283502  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:14.283552  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:14.318812  142150 cri.go:89] found id: ""
	I1212 01:06:14.318840  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.318848  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:14.318855  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:14.318904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:14.356489  142150 cri.go:89] found id: ""
	I1212 01:06:14.356519  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.356527  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:14.356533  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:14.356590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:14.394224  142150 cri.go:89] found id: ""
	I1212 01:06:14.394260  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.394271  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:14.394279  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:14.394350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:14.432440  142150 cri.go:89] found id: ""
	I1212 01:06:14.432467  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.432480  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:14.432488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:14.432540  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:14.469777  142150 cri.go:89] found id: ""
	I1212 01:06:14.469822  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.469835  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:14.469844  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:14.469904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:14.504830  142150 cri.go:89] found id: ""
	I1212 01:06:14.504860  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.504872  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:14.504881  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:14.504941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:14.539399  142150 cri.go:89] found id: ""
	I1212 01:06:14.539423  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.539432  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:14.539441  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:14.539454  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:14.552716  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:14.552749  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:14.628921  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:14.628945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:14.628959  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:14.707219  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:14.707255  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.765953  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:14.765986  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:12.343941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.843333  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.095296  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:16.596411  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:15.706990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.707150  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.324233  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:17.337428  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:17.337499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:17.374493  142150 cri.go:89] found id: ""
	I1212 01:06:17.374526  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.374538  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:17.374547  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:17.374616  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:17.408494  142150 cri.go:89] found id: ""
	I1212 01:06:17.408519  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.408527  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:17.408535  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:17.408582  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:17.452362  142150 cri.go:89] found id: ""
	I1212 01:06:17.452389  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.452397  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:17.452403  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:17.452456  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:17.493923  142150 cri.go:89] found id: ""
	I1212 01:06:17.493957  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.493968  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:17.493976  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:17.494037  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:17.529519  142150 cri.go:89] found id: ""
	I1212 01:06:17.529548  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.529556  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:17.529562  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:17.529610  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:17.570272  142150 cri.go:89] found id: ""
	I1212 01:06:17.570297  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.570305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:17.570312  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:17.570361  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:17.609326  142150 cri.go:89] found id: ""
	I1212 01:06:17.609360  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.609371  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:17.609379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:17.609470  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:17.642814  142150 cri.go:89] found id: ""
	I1212 01:06:17.642844  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.642853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:17.642863  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:17.642875  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:17.656476  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:17.656510  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:17.726997  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:17.727024  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:17.727039  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:17.803377  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:17.803424  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:17.851190  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:17.851222  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
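	(The block above is one pass of minikube's log gatherer: for each control-plane component it runs `sudo crictl ps -a --quiet --name=<component>`, finds no containers, and then falls back to dumping kubelet, dmesg, describe-nodes, CRI-O and container status. A minimal standalone Go sketch of that probing loop — illustrative only, not minikube's own logs.go code — could look like this:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Control-plane components probed in the log above, in the same order.
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
        }
        for _, name := range components {
            // Mirrors: sudo crictl ps -a --quiet --name=<component>
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("crictl failed for %q: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("no container found matching %q\n", name)
            } else {
                fmt.Printf("found %d container(s) for %q\n", len(ids), name)
            }
        }
    }

	An empty result for every component, as seen here, means the CRI has nothing running or exited for the control plane at all.)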
	I1212 01:06:17.344804  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.347642  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.096235  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.594712  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.707303  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.707482  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:24.208937  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:20.406953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:20.420410  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:20.420484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:20.462696  142150 cri.go:89] found id: ""
	I1212 01:06:20.462733  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.462744  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:20.462752  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:20.462815  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:20.522881  142150 cri.go:89] found id: ""
	I1212 01:06:20.522906  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.522915  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:20.522921  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:20.522979  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:20.575876  142150 cri.go:89] found id: ""
	I1212 01:06:20.575917  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.575928  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:20.575936  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:20.576003  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:20.627875  142150 cri.go:89] found id: ""
	I1212 01:06:20.627907  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.627919  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:20.627926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:20.627976  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:20.668323  142150 cri.go:89] found id: ""
	I1212 01:06:20.668353  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.668365  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:20.668372  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:20.668441  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:20.705907  142150 cri.go:89] found id: ""
	I1212 01:06:20.705942  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.705954  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:20.705963  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:20.706023  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:20.740221  142150 cri.go:89] found id: ""
	I1212 01:06:20.740249  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.740257  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:20.740263  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:20.740328  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:20.780346  142150 cri.go:89] found id: ""
	I1212 01:06:20.780372  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.780380  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:20.780390  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:20.780407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:20.837660  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:20.837699  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:20.852743  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:20.852775  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:20.928353  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:20.928385  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:20.928401  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:21.009919  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:21.009961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:23.553897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:23.568667  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:23.568742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:23.607841  142150 cri.go:89] found id: ""
	I1212 01:06:23.607873  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.607884  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:23.607891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:23.607945  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:23.645461  142150 cri.go:89] found id: ""
	I1212 01:06:23.645494  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.645505  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:23.645513  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:23.645578  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:23.681140  142150 cri.go:89] found id: ""
	I1212 01:06:23.681165  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.681174  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:23.681180  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:23.681230  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:23.718480  142150 cri.go:89] found id: ""
	I1212 01:06:23.718515  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.718526  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:23.718534  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:23.718602  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:23.760206  142150 cri.go:89] found id: ""
	I1212 01:06:23.760235  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.760243  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:23.760249  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:23.760302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:23.797384  142150 cri.go:89] found id: ""
	I1212 01:06:23.797417  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.797431  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:23.797439  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:23.797496  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:23.830608  142150 cri.go:89] found id: ""
	I1212 01:06:23.830639  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.830650  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:23.830658  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:23.830722  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:23.867481  142150 cri.go:89] found id: ""
	I1212 01:06:23.867509  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.867522  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:23.867534  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:23.867551  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:23.922529  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:23.922579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:23.936763  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:23.936794  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:24.004371  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:24.004398  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:24.004413  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:24.083097  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:24.083136  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:21.842975  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.845498  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.343574  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.596224  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.094625  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.707610  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:29.208425  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.633394  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:26.646898  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:26.646977  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:26.680382  142150 cri.go:89] found id: ""
	I1212 01:06:26.680411  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.680421  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:26.680427  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:26.680475  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:26.716948  142150 cri.go:89] found id: ""
	I1212 01:06:26.716982  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.716994  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:26.717001  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:26.717090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:26.753141  142150 cri.go:89] found id: ""
	I1212 01:06:26.753168  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.753176  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:26.753182  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:26.753231  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:26.791025  142150 cri.go:89] found id: ""
	I1212 01:06:26.791056  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.791068  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:26.791074  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:26.791130  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:26.829914  142150 cri.go:89] found id: ""
	I1212 01:06:26.829952  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.829965  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:26.829973  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:26.830046  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:26.865990  142150 cri.go:89] found id: ""
	I1212 01:06:26.866022  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.866045  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:26.866053  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:26.866133  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:26.906007  142150 cri.go:89] found id: ""
	I1212 01:06:26.906040  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.906052  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:26.906060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:26.906141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:26.946004  142150 cri.go:89] found id: ""
	I1212 01:06:26.946038  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.946048  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:26.946057  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:26.946073  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:27.018967  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:27.018996  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:27.019013  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:27.100294  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:27.100334  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:27.141147  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:27.141190  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:27.193161  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:27.193200  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:29.709616  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:29.723336  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:29.723413  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:29.769938  142150 cri.go:89] found id: ""
	I1212 01:06:29.769966  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.769977  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:29.769985  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:29.770048  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:29.809109  142150 cri.go:89] found id: ""
	I1212 01:06:29.809147  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.809160  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:29.809168  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:29.809229  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:29.845444  142150 cri.go:89] found id: ""
	I1212 01:06:29.845471  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.845481  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:29.845488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:29.845548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:29.882109  142150 cri.go:89] found id: ""
	I1212 01:06:29.882138  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.882147  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:29.882153  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:29.882203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:29.928731  142150 cri.go:89] found id: ""
	I1212 01:06:29.928764  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.928777  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:29.928785  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:29.928849  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:29.972994  142150 cri.go:89] found id: ""
	I1212 01:06:29.973026  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.973041  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:29.973048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:29.973098  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:30.009316  142150 cri.go:89] found id: ""
	I1212 01:06:30.009349  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.009357  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:30.009363  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:30.009422  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:30.043082  142150 cri.go:89] found id: ""
	I1212 01:06:30.043111  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.043122  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:30.043134  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:30.043149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:30.097831  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:30.097866  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:30.112873  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:30.112906  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:30.187035  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:30.187061  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:30.187081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:28.843986  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.343502  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:28.096043  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.594875  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.707976  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:34.208061  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.273106  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:30.273155  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:32.819179  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:32.833486  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:32.833555  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:32.872579  142150 cri.go:89] found id: ""
	I1212 01:06:32.872622  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.872631  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:32.872645  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:32.872700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:32.909925  142150 cri.go:89] found id: ""
	I1212 01:06:32.909958  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.909970  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:32.909979  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:32.910053  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:32.949085  142150 cri.go:89] found id: ""
	I1212 01:06:32.949116  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.949127  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:32.949135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:32.949197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:32.985755  142150 cri.go:89] found id: ""
	I1212 01:06:32.985782  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.985790  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:32.985796  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:32.985845  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:33.028340  142150 cri.go:89] found id: ""
	I1212 01:06:33.028367  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.028374  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:33.028380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:33.028432  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:33.064254  142150 cri.go:89] found id: ""
	I1212 01:06:33.064283  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.064292  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:33.064298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:33.064349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:33.099905  142150 cri.go:89] found id: ""
	I1212 01:06:33.099936  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.099943  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:33.099949  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:33.100008  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:33.137958  142150 cri.go:89] found id: ""
	I1212 01:06:33.137993  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.138004  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:33.138016  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:33.138034  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:33.190737  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:33.190776  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:33.205466  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:33.205502  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:33.278815  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:33.278844  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:33.278863  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:33.357387  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:33.357429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:33.843106  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.344148  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:33.095175  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.095369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:37.095797  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.707296  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.207875  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.898317  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:35.913832  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:35.913907  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:35.950320  142150 cri.go:89] found id: ""
	I1212 01:06:35.950345  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.950353  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:35.950359  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:35.950407  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:35.989367  142150 cri.go:89] found id: ""
	I1212 01:06:35.989394  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.989403  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:35.989409  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:35.989457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:36.024118  142150 cri.go:89] found id: ""
	I1212 01:06:36.024148  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.024155  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:36.024163  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:36.024221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:36.059937  142150 cri.go:89] found id: ""
	I1212 01:06:36.059966  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.059974  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:36.059980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:36.060030  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:36.096897  142150 cri.go:89] found id: ""
	I1212 01:06:36.096921  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.096933  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:36.096941  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:36.096994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:36.134387  142150 cri.go:89] found id: ""
	I1212 01:06:36.134412  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.134420  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:36.134426  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:36.134490  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:36.177414  142150 cri.go:89] found id: ""
	I1212 01:06:36.177452  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.177464  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:36.177471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:36.177533  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:36.221519  142150 cri.go:89] found id: ""
	I1212 01:06:36.221553  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.221563  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:36.221575  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:36.221590  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:36.234862  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:36.234891  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:36.314361  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:36.314391  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:36.314407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:36.398283  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:36.398328  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:36.441441  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:36.441481  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:38.995369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:39.009149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:39.009221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:39.044164  142150 cri.go:89] found id: ""
	I1212 01:06:39.044194  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.044204  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:39.044210  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:39.044259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:39.080145  142150 cri.go:89] found id: ""
	I1212 01:06:39.080180  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.080191  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:39.080197  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:39.080254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:39.119128  142150 cri.go:89] found id: ""
	I1212 01:06:39.119156  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.119167  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:39.119174  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:39.119240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:39.157444  142150 cri.go:89] found id: ""
	I1212 01:06:39.157476  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.157487  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:39.157495  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:39.157562  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:39.191461  142150 cri.go:89] found id: ""
	I1212 01:06:39.191486  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.191497  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:39.191505  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:39.191573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:39.227742  142150 cri.go:89] found id: ""
	I1212 01:06:39.227769  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.227777  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:39.227783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:39.227832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:39.268207  142150 cri.go:89] found id: ""
	I1212 01:06:39.268239  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.268251  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:39.268259  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:39.268319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:39.304054  142150 cri.go:89] found id: ""
	I1212 01:06:39.304092  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.304103  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:39.304115  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:39.304128  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:39.381937  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:39.381979  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:39.421824  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:39.421864  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:39.475968  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:39.476020  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:39.491398  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:39.491429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:39.568463  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
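	(Every describe-nodes attempt in this section fails the same way: the bundled kubectl cannot reach the API server, so localhost:8443 refuses the connection. A quick standalone check that nothing is listening on that port — an illustration in Go, not part of the test suite — is a plain TCP dial:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Reproduces the "connection refused" seen by kubectl when
        // no kube-apiserver is bound to localhost:8443.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }

	Run on the node, this would fail for as long as the crictl probes above keep returning no kube-apiserver container.)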
	I1212 01:06:38.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.343589  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.594883  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.594919  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.707035  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.707860  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:42.068594  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:42.082041  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:42.082123  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:42.121535  142150 cri.go:89] found id: ""
	I1212 01:06:42.121562  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.121570  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:42.121577  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:42.121627  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:42.156309  142150 cri.go:89] found id: ""
	I1212 01:06:42.156341  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.156350  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:42.156364  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:42.156427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:42.190111  142150 cri.go:89] found id: ""
	I1212 01:06:42.190137  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.190145  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:42.190151  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:42.190209  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:42.225424  142150 cri.go:89] found id: ""
	I1212 01:06:42.225452  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.225461  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:42.225468  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:42.225526  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:42.260519  142150 cri.go:89] found id: ""
	I1212 01:06:42.260552  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.260564  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:42.260576  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:42.260644  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:42.296987  142150 cri.go:89] found id: ""
	I1212 01:06:42.297017  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.297028  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:42.297036  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:42.297109  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:42.331368  142150 cri.go:89] found id: ""
	I1212 01:06:42.331400  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.331409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:42.331415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:42.331482  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:42.367010  142150 cri.go:89] found id: ""
	I1212 01:06:42.367051  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.367062  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:42.367075  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:42.367093  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:42.381264  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:42.381299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:42.452831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:42.452856  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:42.452877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:42.531965  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:42.532006  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:42.571718  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:42.571757  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.128570  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:45.142897  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:45.142969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:45.186371  142150 cri.go:89] found id: ""
	I1212 01:06:45.186404  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.186412  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:45.186418  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:45.186468  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:45.224085  142150 cri.go:89] found id: ""
	I1212 01:06:45.224115  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.224123  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:45.224129  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:45.224195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:43.346470  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.845269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.595640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.596624  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.708204  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:48.206947  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.258477  142150 cri.go:89] found id: ""
	I1212 01:06:45.258510  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.258522  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:45.258530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:45.258590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:45.293091  142150 cri.go:89] found id: ""
	I1212 01:06:45.293125  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.293137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:45.293145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:45.293211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:45.331275  142150 cri.go:89] found id: ""
	I1212 01:06:45.331314  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.331325  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:45.331332  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:45.331400  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:45.374915  142150 cri.go:89] found id: ""
	I1212 01:06:45.374943  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.374956  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:45.374965  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:45.375027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:45.415450  142150 cri.go:89] found id: ""
	I1212 01:06:45.415480  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.415489  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:45.415496  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:45.415548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:45.454407  142150 cri.go:89] found id: ""
	I1212 01:06:45.454431  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.454439  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:45.454449  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:45.454460  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.508573  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:45.508612  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:45.524049  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:45.524085  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:45.593577  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:45.593602  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:45.593618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:45.678581  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:45.678620  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.221523  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:48.235146  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:48.235212  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:48.271845  142150 cri.go:89] found id: ""
	I1212 01:06:48.271875  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.271885  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:48.271891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:48.271944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:48.308558  142150 cri.go:89] found id: ""
	I1212 01:06:48.308589  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.308602  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:48.308610  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:48.308673  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:48.346395  142150 cri.go:89] found id: ""
	I1212 01:06:48.346423  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.346434  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:48.346440  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:48.346501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:48.381505  142150 cri.go:89] found id: ""
	I1212 01:06:48.381536  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.381548  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:48.381555  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:48.381617  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:48.417829  142150 cri.go:89] found id: ""
	I1212 01:06:48.417859  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.417871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:48.417878  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:48.417944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:48.453476  142150 cri.go:89] found id: ""
	I1212 01:06:48.453508  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.453519  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:48.453528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:48.453592  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:48.490500  142150 cri.go:89] found id: ""
	I1212 01:06:48.490531  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.490541  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:48.490547  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:48.490597  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:48.527492  142150 cri.go:89] found id: ""
	I1212 01:06:48.527520  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.527529  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:48.527539  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:48.527550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.570458  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:48.570499  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:48.623986  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:48.624031  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:48.638363  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:48.638392  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:48.709373  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:48.709400  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:48.709416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:48.344831  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.345010  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:47.596708  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.094517  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:52.094931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.706903  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:53.207824  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:51.291629  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:51.305060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:51.305140  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:51.340368  142150 cri.go:89] found id: ""
	I1212 01:06:51.340394  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.340404  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:51.340411  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:51.340489  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:51.381421  142150 cri.go:89] found id: ""
	I1212 01:06:51.381453  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.381466  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:51.381474  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:51.381536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:51.421482  142150 cri.go:89] found id: ""
	I1212 01:06:51.421518  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.421530  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:51.421538  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:51.421605  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:51.457190  142150 cri.go:89] found id: ""
	I1212 01:06:51.457217  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.457227  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:51.457236  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:51.457302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:51.496149  142150 cri.go:89] found id: ""
	I1212 01:06:51.496184  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.496196  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:51.496205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:51.496270  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:51.529779  142150 cri.go:89] found id: ""
	I1212 01:06:51.529809  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.529820  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:51.529826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:51.529893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:51.568066  142150 cri.go:89] found id: ""
	I1212 01:06:51.568105  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.568118  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:51.568126  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:51.568197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:51.605556  142150 cri.go:89] found id: ""
	I1212 01:06:51.605593  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.605605  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:51.605616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:51.605632  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:51.680531  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:51.680570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:51.727663  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:51.727697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:51.780013  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:51.780053  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:51.794203  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:51.794232  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:51.869407  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.369854  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:54.383539  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:54.383625  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:54.418536  142150 cri.go:89] found id: ""
	I1212 01:06:54.418574  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.418586  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:54.418594  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:54.418657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:54.454485  142150 cri.go:89] found id: ""
	I1212 01:06:54.454515  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.454523  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:54.454531  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:54.454581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:54.494254  142150 cri.go:89] found id: ""
	I1212 01:06:54.494284  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.494296  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:54.494304  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:54.494366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:54.532727  142150 cri.go:89] found id: ""
	I1212 01:06:54.532757  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.532768  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:54.532776  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:54.532862  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:54.569817  142150 cri.go:89] found id: ""
	I1212 01:06:54.569845  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.569856  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:54.569864  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:54.569927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:54.628530  142150 cri.go:89] found id: ""
	I1212 01:06:54.628564  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.628577  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:54.628585  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:54.628635  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:54.666761  142150 cri.go:89] found id: ""
	I1212 01:06:54.666792  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.666801  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:54.666808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:54.666879  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:54.703699  142150 cri.go:89] found id: ""
	I1212 01:06:54.703726  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.703737  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:54.703749  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:54.703764  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:54.754635  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:54.754672  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:54.769112  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:54.769143  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:54.845563  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.845580  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:54.845591  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:54.922651  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:54.922690  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:52.843114  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.845370  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.095381  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:56.097745  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:55.207916  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.708907  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.467454  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:57.480673  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:57.480769  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:57.517711  142150 cri.go:89] found id: ""
	I1212 01:06:57.517737  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.517745  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:57.517751  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:57.517813  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:57.552922  142150 cri.go:89] found id: ""
	I1212 01:06:57.552948  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.552956  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:57.552977  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:57.553061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:57.589801  142150 cri.go:89] found id: ""
	I1212 01:06:57.589827  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.589839  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:57.589845  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:57.589909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:57.626088  142150 cri.go:89] found id: ""
	I1212 01:06:57.626123  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.626135  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:57.626142  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:57.626211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:57.661228  142150 cri.go:89] found id: ""
	I1212 01:06:57.661261  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.661273  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:57.661281  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:57.661344  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:57.699523  142150 cri.go:89] found id: ""
	I1212 01:06:57.699551  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.699559  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:57.699565  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:57.699641  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:57.739000  142150 cri.go:89] found id: ""
	I1212 01:06:57.739032  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.739043  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:57.739051  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:57.739128  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:57.776691  142150 cri.go:89] found id: ""
	I1212 01:06:57.776723  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.776732  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:57.776743  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:57.776767  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:57.828495  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:57.828535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:57.843935  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:57.843970  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:57.916420  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:57.916446  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:57.916463  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:57.994107  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:57.994158  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:57.344917  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:59.844269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:58.595415  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:01.095794  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.208708  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:02.707173  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.540646  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:00.554032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:00.554141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:00.590815  142150 cri.go:89] found id: ""
	I1212 01:07:00.590843  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.590852  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:00.590858  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:00.590919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:00.627460  142150 cri.go:89] found id: ""
	I1212 01:07:00.627494  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.627507  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:00.627515  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:00.627586  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:00.667429  142150 cri.go:89] found id: ""
	I1212 01:07:00.667472  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.667484  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:00.667494  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:00.667558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:00.713026  142150 cri.go:89] found id: ""
	I1212 01:07:00.713053  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.713060  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:00.713067  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:00.713129  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:00.748218  142150 cri.go:89] found id: ""
	I1212 01:07:00.748251  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.748264  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:00.748272  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:00.748325  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:00.786287  142150 cri.go:89] found id: ""
	I1212 01:07:00.786314  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.786322  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:00.786331  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:00.786389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:00.822957  142150 cri.go:89] found id: ""
	I1212 01:07:00.822986  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.822999  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:00.823007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:00.823081  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:00.862310  142150 cri.go:89] found id: ""
	I1212 01:07:00.862342  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.862354  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:00.862368  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:00.862385  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:00.930308  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:00.930343  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:00.930360  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:01.013889  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:01.013934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:01.064305  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:01.064342  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:01.133631  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:01.133678  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:03.648853  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:03.663287  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:03.663349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:03.700723  142150 cri.go:89] found id: ""
	I1212 01:07:03.700754  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.700766  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:03.700774  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:03.700840  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:03.741025  142150 cri.go:89] found id: ""
	I1212 01:07:03.741054  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.741065  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:03.741073  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:03.741147  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:03.782877  142150 cri.go:89] found id: ""
	I1212 01:07:03.782914  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.782927  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:03.782935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:03.782998  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:03.819227  142150 cri.go:89] found id: ""
	I1212 01:07:03.819272  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.819285  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:03.819292  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:03.819341  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:03.856660  142150 cri.go:89] found id: ""
	I1212 01:07:03.856687  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.856695  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:03.856701  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:03.856750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:03.893368  142150 cri.go:89] found id: ""
	I1212 01:07:03.893400  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.893410  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:03.893417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:03.893469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:03.929239  142150 cri.go:89] found id: ""
	I1212 01:07:03.929267  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.929275  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:03.929282  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:03.929335  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:03.963040  142150 cri.go:89] found id: ""
	I1212 01:07:03.963077  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.963089  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:03.963113  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:03.963129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:04.040119  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:04.040147  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:04.040161  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:04.122230  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:04.122269  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:04.163266  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:04.163298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:04.218235  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:04.218271  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:02.342899  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:04.343072  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:03.596239  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.094842  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:05.206813  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:07.209422  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.732405  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:06.748171  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:06.748278  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:06.792828  142150 cri.go:89] found id: ""
	I1212 01:07:06.792853  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.792861  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:06.792868  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:06.792929  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:06.851440  142150 cri.go:89] found id: ""
	I1212 01:07:06.851472  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.851483  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:06.851490  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:06.851556  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:06.894850  142150 cri.go:89] found id: ""
	I1212 01:07:06.894879  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.894887  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:06.894893  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:06.894944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:06.931153  142150 cri.go:89] found id: ""
	I1212 01:07:06.931188  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.931199  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:06.931206  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:06.931271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:06.966835  142150 cri.go:89] found id: ""
	I1212 01:07:06.966862  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.966871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:06.966877  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:06.966939  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:07.004810  142150 cri.go:89] found id: ""
	I1212 01:07:07.004839  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.004848  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:07.004854  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:07.004912  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:07.042641  142150 cri.go:89] found id: ""
	I1212 01:07:07.042679  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.042691  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:07.042699  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:07.042764  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:07.076632  142150 cri.go:89] found id: ""
	I1212 01:07:07.076659  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.076668  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:07.076678  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:07.076692  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:07.136796  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:07.136841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:07.153797  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:07.153831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:07.231995  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:07.232025  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:07.232042  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:07.319913  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:07.319950  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:09.862898  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:09.878554  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:09.878640  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:09.914747  142150 cri.go:89] found id: ""
	I1212 01:07:09.914782  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.914795  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:09.914803  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:09.914864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:09.949960  142150 cri.go:89] found id: ""
	I1212 01:07:09.949998  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.950019  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:09.950027  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:09.950084  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:09.989328  142150 cri.go:89] found id: ""
	I1212 01:07:09.989368  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.989380  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:09.989388  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:09.989454  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:10.024352  142150 cri.go:89] found id: ""
	I1212 01:07:10.024382  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.024390  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:10.024397  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:10.024446  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:10.058429  142150 cri.go:89] found id: ""
	I1212 01:07:10.058459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.058467  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:10.058473  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:10.058524  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:10.095183  142150 cri.go:89] found id: ""
	I1212 01:07:10.095219  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.095227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:10.095232  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:10.095284  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:10.129657  142150 cri.go:89] found id: ""
	I1212 01:07:10.129684  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.129695  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:10.129703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:10.129759  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:10.164433  142150 cri.go:89] found id: ""
	I1212 01:07:10.164459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.164470  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:10.164483  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:10.164500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:10.178655  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:10.178687  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 01:07:08.842564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.843885  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:08.095189  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.096580  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:09.707537  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.205862  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.207175  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	W1212 01:07:10.252370  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:10.252403  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:10.252421  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:10.329870  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:10.329914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:10.377778  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:10.377812  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:12.929471  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:12.944591  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:12.944651  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:12.980053  142150 cri.go:89] found id: ""
	I1212 01:07:12.980079  142150 logs.go:282] 0 containers: []
	W1212 01:07:12.980088  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:12.980097  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:12.980182  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:13.021710  142150 cri.go:89] found id: ""
	I1212 01:07:13.021743  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.021752  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:13.021758  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:13.021828  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:13.060426  142150 cri.go:89] found id: ""
	I1212 01:07:13.060458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.060469  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:13.060477  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:13.060545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:13.097435  142150 cri.go:89] found id: ""
	I1212 01:07:13.097458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.097466  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:13.097471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:13.097521  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:13.134279  142150 cri.go:89] found id: ""
	I1212 01:07:13.134314  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.134327  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:13.134335  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:13.134402  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:13.169942  142150 cri.go:89] found id: ""
	I1212 01:07:13.169971  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.169984  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:13.169992  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:13.170054  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:13.207495  142150 cri.go:89] found id: ""
	I1212 01:07:13.207526  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.207537  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:13.207550  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:13.207636  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:13.245214  142150 cri.go:89] found id: ""
	I1212 01:07:13.245240  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.245248  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:13.245258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:13.245272  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:13.301041  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:13.301081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:13.316068  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:13.316104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:13.391091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:13.391120  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:13.391138  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:13.472090  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:13.472130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:12.844629  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:15.344452  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.594761  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.595360  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:17.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.707535  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.208767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.013216  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.026636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:16.026715  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:16.062126  142150 cri.go:89] found id: ""
	I1212 01:07:16.062157  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.062169  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:16.062177  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:16.062240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:16.097538  142150 cri.go:89] found id: ""
	I1212 01:07:16.097562  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.097572  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:16.097581  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:16.097637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:16.133615  142150 cri.go:89] found id: ""
	I1212 01:07:16.133649  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.133661  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:16.133670  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:16.133732  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:16.169327  142150 cri.go:89] found id: ""
	I1212 01:07:16.169392  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.169414  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:16.169431  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:16.169538  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:16.214246  142150 cri.go:89] found id: ""
	I1212 01:07:16.214270  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.214278  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:16.214284  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:16.214342  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:16.251578  142150 cri.go:89] found id: ""
	I1212 01:07:16.251629  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.251641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:16.251649  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:16.251712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:16.298772  142150 cri.go:89] found id: ""
	I1212 01:07:16.298802  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.298811  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:16.298818  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:16.298891  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:16.336901  142150 cri.go:89] found id: ""
	I1212 01:07:16.336937  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.336946  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:16.336957  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:16.336969  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:16.389335  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:16.389376  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:16.403713  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:16.403743  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:16.485945  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:16.485972  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:16.485992  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:16.572137  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:16.572185  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.120296  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.133826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:19.133902  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:19.174343  142150 cri.go:89] found id: ""
	I1212 01:07:19.174381  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.174391  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:19.174397  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:19.174449  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:19.212403  142150 cri.go:89] found id: ""
	I1212 01:07:19.212425  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.212433  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:19.212439  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:19.212488  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:19.247990  142150 cri.go:89] found id: ""
	I1212 01:07:19.248018  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.248027  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:19.248033  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:19.248088  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:19.286733  142150 cri.go:89] found id: ""
	I1212 01:07:19.286763  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.286775  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:19.286783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:19.286848  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:19.325967  142150 cri.go:89] found id: ""
	I1212 01:07:19.325995  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.326006  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:19.326013  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:19.326073  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:19.361824  142150 cri.go:89] found id: ""
	I1212 01:07:19.361862  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.361874  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:19.361882  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:19.361951  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:19.399874  142150 cri.go:89] found id: ""
	I1212 01:07:19.399903  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.399915  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:19.399924  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:19.399978  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:19.444342  142150 cri.go:89] found id: ""
	I1212 01:07:19.444368  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.444376  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:19.444386  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:19.444398  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:19.524722  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:19.524766  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.564941  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:19.564984  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:19.620881  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:19.620915  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:19.635038  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:19.635078  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:19.707819  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
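
The cycle above is the retry loop probing CRI-O for each expected control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager) and finding none, after which the follow-up `kubectl describe nodes` is refused because nothing is listening on localhost:8443. The snippet below is a minimal stand-alone sketch of just that container probe, not minikube's cri.go implementation; the use of sudo and a crictl binary on PATH are assumptions.

    // Minimal sketch (not minikube's cri.go): ask crictl for container IDs
    // matching one control-plane component, as the log lines above do.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Assumes crictl is installed and runnable via sudo on this host.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		fmt.Println(`No container was found matching "kube-apiserver"`)
    		return
    	}
    	fmt.Println("found ids:", ids)
    }
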
	I1212 01:07:17.851516  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:20.343210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.596696  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.095982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:21.706245  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:23.707282  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.208686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.222716  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:22.222774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:22.258211  142150 cri.go:89] found id: ""
	I1212 01:07:22.258237  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.258245  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:22.258251  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:22.258299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:22.294663  142150 cri.go:89] found id: ""
	I1212 01:07:22.294692  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.294701  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:22.294707  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:22.294771  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:22.331817  142150 cri.go:89] found id: ""
	I1212 01:07:22.331849  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.331861  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:22.331869  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:22.331927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:22.373138  142150 cri.go:89] found id: ""
	I1212 01:07:22.373168  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.373176  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:22.373185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:22.373238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:22.409864  142150 cri.go:89] found id: ""
	I1212 01:07:22.409903  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.409916  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:22.409927  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:22.409983  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:22.447498  142150 cri.go:89] found id: ""
	I1212 01:07:22.447531  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.447542  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:22.447549  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:22.447626  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:22.488674  142150 cri.go:89] found id: ""
	I1212 01:07:22.488715  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.488727  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:22.488735  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:22.488803  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:22.529769  142150 cri.go:89] found id: ""
	I1212 01:07:22.529797  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.529806  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:22.529817  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:22.529837  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:22.611864  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:22.611889  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:22.611904  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:22.694660  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:22.694707  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:22.736800  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:22.736838  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:22.789670  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:22.789710  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:22.344482  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.844735  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.594999  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:26.595500  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:25.707950  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.200781  141469 pod_ready.go:82] duration metric: took 4m0.000776844s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:28.200837  141469 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:28.200866  141469 pod_ready.go:39] duration metric: took 4m15.556500045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:28.200916  141469 kubeadm.go:597] duration metric: took 4m22.571399912s to restartPrimaryControlPlane
	W1212 01:07:28.201043  141469 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:28.201086  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
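
Just above, the 4m wait for the metrics-server pod to report Ready times out, so minikube gives up on restarting the existing control plane and falls back to `kubeadm reset`. The loop below sketches that kind of Ready poll using kubectl's JSONPath output; it is an assumed stand-in for minikube's pod_ready.go, with the pod name and namespace taken from the log above and the 4-minute deadline and 10-second interval used as illustrative assumptions.

    // Minimal sketch (assumed helper, not minikube's pod_ready.go): poll a
    // pod's Ready condition with kubectl until it is "True" or time runs out.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute)
    	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "-n", "kube-system",
    			"get", "pod", "metrics-server-6867b74b74-5bms9", jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(10 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to be Ready")
    }
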
	I1212 01:07:25.305223  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.318986  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:25.319057  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:25.356111  142150 cri.go:89] found id: ""
	I1212 01:07:25.356140  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.356150  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:25.356157  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:25.356223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:25.396120  142150 cri.go:89] found id: ""
	I1212 01:07:25.396151  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.396163  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:25.396171  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:25.396236  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:25.436647  142150 cri.go:89] found id: ""
	I1212 01:07:25.436674  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.436681  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:25.436687  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:25.436744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:25.475682  142150 cri.go:89] found id: ""
	I1212 01:07:25.475709  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.475721  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:25.475729  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:25.475791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:25.512536  142150 cri.go:89] found id: ""
	I1212 01:07:25.512564  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.512576  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:25.512584  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:25.512655  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:25.549569  142150 cri.go:89] found id: ""
	I1212 01:07:25.549600  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.549609  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:25.549616  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:25.549681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:25.585042  142150 cri.go:89] found id: ""
	I1212 01:07:25.585074  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.585089  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:25.585106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:25.585181  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:25.626257  142150 cri.go:89] found id: ""
	I1212 01:07:25.626283  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.626291  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:25.626301  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:25.626314  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:25.679732  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:25.679773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:25.693682  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:25.693711  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:25.770576  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:25.770599  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:25.770613  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:25.848631  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:25.848667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
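
Each gathering pass also retries `kubectl describe nodes` and fails the same way, because nothing is yet serving on the apiserver port. That precondition can be expressed as a plain TCP dial; the helper below is a hypothetical sketch, not part of minikube, and the hard-coded localhost:8443 address and 2-second timeout are assumptions matching the error text in the log.

    // Minimal sketch (hypothetical helper): check whether anything is
    // listening on the apiserver port, the condition behind the repeated
    // "connection refused" failures above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
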
	I1212 01:07:28.388387  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.404838  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:28.404925  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:28.447452  142150 cri.go:89] found id: ""
	I1212 01:07:28.447486  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.447498  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:28.447506  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:28.447581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:28.487285  142150 cri.go:89] found id: ""
	I1212 01:07:28.487312  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.487321  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:28.487326  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:28.487389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:28.520403  142150 cri.go:89] found id: ""
	I1212 01:07:28.520433  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.520442  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:28.520448  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:28.520514  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:28.556671  142150 cri.go:89] found id: ""
	I1212 01:07:28.556703  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.556712  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:28.556720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:28.556787  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:28.597136  142150 cri.go:89] found id: ""
	I1212 01:07:28.597165  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.597176  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:28.597185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:28.597258  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:28.632603  142150 cri.go:89] found id: ""
	I1212 01:07:28.632633  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.632641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:28.632648  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:28.632710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:28.672475  142150 cri.go:89] found id: ""
	I1212 01:07:28.672512  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.672523  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:28.672530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:28.672581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:28.715053  142150 cri.go:89] found id: ""
	I1212 01:07:28.715093  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.715104  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:28.715114  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:28.715129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.752978  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:28.753017  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:28.807437  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:28.807479  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:28.822196  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:28.822223  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:28.902592  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:28.902616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:28.902630  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:27.343233  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:29.344194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.596410  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.096062  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.486972  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.500676  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:31.500755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:31.536877  142150 cri.go:89] found id: ""
	I1212 01:07:31.536911  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.536922  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:31.536931  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:31.537000  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:31.572637  142150 cri.go:89] found id: ""
	I1212 01:07:31.572670  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.572684  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:31.572692  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:31.572761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:31.610050  142150 cri.go:89] found id: ""
	I1212 01:07:31.610084  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.610097  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:31.610106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:31.610159  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:31.645872  142150 cri.go:89] found id: ""
	I1212 01:07:31.645905  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.645918  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:31.645926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:31.645988  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:31.682374  142150 cri.go:89] found id: ""
	I1212 01:07:31.682401  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.682409  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:31.682415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:31.682464  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:31.724755  142150 cri.go:89] found id: ""
	I1212 01:07:31.724788  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.724801  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:31.724809  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:31.724877  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:31.760700  142150 cri.go:89] found id: ""
	I1212 01:07:31.760732  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.760741  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:31.760747  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:31.760823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:31.794503  142150 cri.go:89] found id: ""
	I1212 01:07:31.794538  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.794549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:31.794562  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:31.794577  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:31.837103  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:31.837139  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:31.889104  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:31.889142  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:31.905849  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:31.905883  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:31.983351  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:31.983372  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:31.983388  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:34.564505  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.577808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:34.577884  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:34.616950  142150 cri.go:89] found id: ""
	I1212 01:07:34.616979  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.616992  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:34.617001  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:34.617071  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:34.653440  142150 cri.go:89] found id: ""
	I1212 01:07:34.653470  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.653478  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:34.653485  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:34.653535  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:34.693426  142150 cri.go:89] found id: ""
	I1212 01:07:34.693457  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.693465  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:34.693471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:34.693520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:34.727113  142150 cri.go:89] found id: ""
	I1212 01:07:34.727154  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.727166  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:34.727175  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:34.727237  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:34.766942  142150 cri.go:89] found id: ""
	I1212 01:07:34.766967  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.766974  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:34.766981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:34.767032  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:34.806189  142150 cri.go:89] found id: ""
	I1212 01:07:34.806214  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.806223  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:34.806229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:34.806293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:34.839377  142150 cri.go:89] found id: ""
	I1212 01:07:34.839408  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.839420  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:34.839429  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:34.839486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:34.877512  142150 cri.go:89] found id: ""
	I1212 01:07:34.877541  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.877549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:34.877558  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:34.877570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:34.914966  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:34.914994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:34.964993  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:34.965033  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:34.979644  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:34.979677  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:35.050842  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:35.050868  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:35.050893  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:31.843547  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.843911  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:36.343719  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.595369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:35.600094  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:37.634362  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.647476  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:37.647542  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:37.681730  142150 cri.go:89] found id: ""
	I1212 01:07:37.681760  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.681768  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:37.681775  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:37.681827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:37.716818  142150 cri.go:89] found id: ""
	I1212 01:07:37.716845  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.716858  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:37.716864  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:37.716913  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:37.753005  142150 cri.go:89] found id: ""
	I1212 01:07:37.753034  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.753042  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:37.753048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:37.753104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:37.789850  142150 cri.go:89] found id: ""
	I1212 01:07:37.789888  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.789900  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:37.789909  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:37.789971  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:37.826418  142150 cri.go:89] found id: ""
	I1212 01:07:37.826455  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.826466  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:37.826475  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:37.826539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:37.862108  142150 cri.go:89] found id: ""
	I1212 01:07:37.862134  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.862143  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:37.862149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:37.862202  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:37.897622  142150 cri.go:89] found id: ""
	I1212 01:07:37.897660  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.897673  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:37.897681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:37.897743  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:37.935027  142150 cri.go:89] found id: ""
	I1212 01:07:37.935055  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.935063  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:37.935072  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:37.935088  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:37.949860  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:37.949890  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:38.019692  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:38.019721  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:38.019740  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:38.100964  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:38.100994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:38.144480  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:38.144514  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:38.844539  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.844997  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:38.096180  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.699192  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.712311  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:40.712398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:40.748454  142150 cri.go:89] found id: ""
	I1212 01:07:40.748482  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.748490  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:40.748496  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:40.748545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:40.785262  142150 cri.go:89] found id: ""
	I1212 01:07:40.785292  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.785305  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:40.785312  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:40.785376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:40.821587  142150 cri.go:89] found id: ""
	I1212 01:07:40.821624  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.821636  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:40.821644  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:40.821713  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:40.882891  142150 cri.go:89] found id: ""
	I1212 01:07:40.882918  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.882926  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:40.882935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:40.882987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:40.923372  142150 cri.go:89] found id: ""
	I1212 01:07:40.923403  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.923412  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:40.923419  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:40.923485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:40.962753  142150 cri.go:89] found id: ""
	I1212 01:07:40.962781  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.962789  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:40.962795  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:40.962851  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:40.996697  142150 cri.go:89] found id: ""
	I1212 01:07:40.996731  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.996744  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:40.996751  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:40.996812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:41.031805  142150 cri.go:89] found id: ""
	I1212 01:07:41.031842  142150 logs.go:282] 0 containers: []
	W1212 01:07:41.031855  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:41.031866  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:41.031884  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:41.108288  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:41.108310  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:41.108333  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:41.190075  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:41.190115  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:41.235886  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:41.235927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:41.288515  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:41.288554  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:43.803694  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.817859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:43.817919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:43.864193  142150 cri.go:89] found id: ""
	I1212 01:07:43.864221  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.864228  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:43.864234  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:43.864288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:43.902324  142150 cri.go:89] found id: ""
	I1212 01:07:43.902359  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.902371  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:43.902379  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:43.902443  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:43.940847  142150 cri.go:89] found id: ""
	I1212 01:07:43.940880  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.940890  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:43.940896  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:43.940947  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:43.979270  142150 cri.go:89] found id: ""
	I1212 01:07:43.979302  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.979314  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:43.979322  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:43.979398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:44.024819  142150 cri.go:89] found id: ""
	I1212 01:07:44.024851  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.024863  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:44.024872  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:44.024941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:44.062199  142150 cri.go:89] found id: ""
	I1212 01:07:44.062225  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.062234  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:44.062242  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:44.062306  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:44.097158  142150 cri.go:89] found id: ""
	I1212 01:07:44.097181  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.097188  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:44.097194  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:44.097240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:44.132067  142150 cri.go:89] found id: ""
	I1212 01:07:44.132105  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.132120  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:44.132132  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:44.132148  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:44.179552  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:44.179589  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:44.238243  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:44.238299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:44.255451  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:44.255493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:44.331758  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:44.331784  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:44.331797  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:43.343026  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.343118  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:42.595856  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.096338  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:46.916033  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.929686  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:46.929761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:46.966328  142150 cri.go:89] found id: ""
	I1212 01:07:46.966357  142150 logs.go:282] 0 containers: []
	W1212 01:07:46.966365  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:46.966371  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:46.966423  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:47.002014  142150 cri.go:89] found id: ""
	I1212 01:07:47.002059  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.002074  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:47.002082  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:47.002148  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:47.038127  142150 cri.go:89] found id: ""
	I1212 01:07:47.038158  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.038166  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:47.038172  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:47.038222  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:47.071654  142150 cri.go:89] found id: ""
	I1212 01:07:47.071684  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.071696  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:47.071704  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:47.071774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:47.105489  142150 cri.go:89] found id: ""
	I1212 01:07:47.105515  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.105524  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:47.105530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:47.105577  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.143005  142150 cri.go:89] found id: ""
	I1212 01:07:47.143042  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.143051  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:47.143058  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:47.143114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:47.176715  142150 cri.go:89] found id: ""
	I1212 01:07:47.176746  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.176756  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:47.176764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:47.176827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:47.211770  142150 cri.go:89] found id: ""
	I1212 01:07:47.211806  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.211817  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:47.211831  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:47.211850  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:47.312766  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:47.312795  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:47.312811  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:47.402444  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:47.402493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:47.441071  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:47.441109  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:47.494465  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:47.494507  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.009996  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.023764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:50.023832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:50.060392  142150 cri.go:89] found id: ""
	I1212 01:07:50.060424  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.060433  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:50.060440  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:50.060497  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:50.094874  142150 cri.go:89] found id: ""
	I1212 01:07:50.094904  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.094914  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:50.094923  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:50.094987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:50.128957  142150 cri.go:89] found id: ""
	I1212 01:07:50.128986  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.128996  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:50.129005  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:50.129067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:50.164794  142150 cri.go:89] found id: ""
	I1212 01:07:50.164819  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.164828  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:50.164835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:50.164890  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:50.201295  142150 cri.go:89] found id: ""
	I1212 01:07:50.201330  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.201342  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:50.201350  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:50.201415  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.343485  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:48.337317  141884 pod_ready.go:82] duration metric: took 4m0.000178627s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:48.337358  141884 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:48.337386  141884 pod_ready.go:39] duration metric: took 4m14.601527023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:48.337421  141884 kubeadm.go:597] duration metric: took 4m22.883520304s to restartPrimaryControlPlane
	W1212 01:07:48.337486  141884 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:48.337526  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:47.595092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:50.096774  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.514069  141469 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312952103s)
	I1212 01:07:54.514153  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:54.543613  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:54.555514  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:54.569001  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:54.569024  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:54.569082  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:54.583472  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:54.583553  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:54.598721  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:54.614369  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:54.614451  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:54.625630  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.643317  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:54.643398  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.652870  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:54.662703  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:54.662774  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
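The stale-config cleanup above follows a simple pattern: grep each kubeadm-generated kubeconfig for the expected control-plane endpoint and remove the file when the endpoint is absent (or the file is missing). A minimal Go sketch of that pattern, assuming the checks run on the local filesystem rather than through minikube's ssh_runner, and reusing the endpoint seen in this log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Endpoint taken from the log; minikube derives it from the cluster profile.
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent or the file does not exist.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing stale file\n", endpoint, conf)
			if err := exec.Command("sudo", "rm", "-f", conf).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "failed to remove %s: %v\n", conf, err)
			}
		}
	}
}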
	I1212 01:07:54.672601  141469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:54.722949  141469 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:07:54.723064  141469 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:54.845332  141469 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:54.845476  141469 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:54.845623  141469 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:54.855468  141469 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:50.236158  142150 cri.go:89] found id: ""
	I1212 01:07:50.236200  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.236212  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:50.236221  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:50.236271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:50.270232  142150 cri.go:89] found id: ""
	I1212 01:07:50.270268  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.270280  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:50.270288  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:50.270356  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:50.303222  142150 cri.go:89] found id: ""
	I1212 01:07:50.303247  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.303258  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
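The container discovery above is one crictl invocation per expected component, collecting any container IDs returned; with every list coming back empty, the tool falls back to host-level log gathering below. A rough Go sketch of the same loop, assuming crictl is installed and using the component names from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// --quiet prints only container IDs; -a includes exited containers.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}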
	I1212 01:07:50.303270  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:50.303288  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.316845  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:50.316874  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:50.384455  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:50.384483  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:50.384500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:50.462863  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:50.462921  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:50.503464  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:50.503495  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
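The diagnostic pass above shells out to dmesg, kubectl describe nodes, journalctl for CRI-O and the kubelet, and a container-status listing. A compact sketch that collects roughly the same set locally (the paths and unit names are copied from the log; the which-crictl fallback is omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry mirrors one "Gathering logs for ..." step in the log above.
	sources := map[string]string{
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo crictl ps -a || sudo docker ps -a",
		"kubelet":          "sudo journalctl -u kubelet -n 400",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s (failed: %v) ==\n%s\n", name, err, out)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}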
	I1212 01:07:53.063953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.079946  142150 kubeadm.go:597] duration metric: took 4m3.966538012s to restartPrimaryControlPlane
	W1212 01:07:53.080031  142150 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:53.080064  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:54.857558  141469 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:54.857689  141469 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:54.857774  141469 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:54.857890  141469 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:54.857960  141469 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:54.858038  141469 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:54.858109  141469 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:54.858214  141469 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:54.858296  141469 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:54.858396  141469 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:54.858503  141469 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:54.858557  141469 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:54.858643  141469 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:55.129859  141469 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:55.274235  141469 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:07:55.401999  141469 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:56.015091  141469 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:56.123268  141469 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:56.123820  141469 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:56.126469  141469 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:52.595027  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:57.096606  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:58.255454  142150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.175361092s)
	I1212 01:07:58.255545  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:58.270555  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:58.281367  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:58.291555  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:58.291580  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:58.291652  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:58.301408  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:58.301473  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:58.314324  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:58.326559  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:58.326628  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:58.338454  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.348752  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:58.348815  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.361968  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:58.374545  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:58.374614  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:07:58.387280  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:58.474893  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:07:58.475043  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:58.647222  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:58.647400  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:58.647566  142150 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:58.839198  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:56.128185  141469 out.go:235]   - Booting up control plane ...
	I1212 01:07:56.128343  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:56.128478  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:56.128577  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:56.149476  141469 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:56.156042  141469 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:56.156129  141469 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:56.292423  141469 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:07:56.292567  141469 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:07:56.794594  141469 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.027526ms
	I1212 01:07:56.794711  141469 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:07:58.841061  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:58.841173  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:58.841297  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:58.841411  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:58.841491  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:58.841575  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:58.841650  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:58.841771  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:58.842200  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:58.842503  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:58.842993  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:58.843207  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:58.843355  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:58.919303  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:59.206038  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:59.318620  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:59.693734  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:59.709562  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:59.710774  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:59.710846  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:59.877625  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:59.879576  142150 out.go:235]   - Booting up control plane ...
	I1212 01:07:59.879733  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:59.892655  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:59.894329  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:59.897694  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:59.898269  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:07:59.594764  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:01.595663  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:02.299386  141469 kubeadm.go:310] [api-check] The API server is healthy after 5.503154599s
	I1212 01:08:02.311549  141469 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:02.326944  141469 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:02.354402  141469 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:02.354661  141469 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-607268 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:02.368168  141469 kubeadm.go:310] [bootstrap-token] Using token: 0eo07f.wy46ulxfywwd0uy8
	I1212 01:08:02.369433  141469 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:02.369569  141469 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:02.381945  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:02.407880  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:02.419211  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:02.426470  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:02.437339  141469 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:02.708518  141469 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:03.143189  141469 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:03.704395  141469 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:03.705460  141469 kubeadm.go:310] 
	I1212 01:08:03.705557  141469 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:03.705576  141469 kubeadm.go:310] 
	I1212 01:08:03.705646  141469 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:03.705650  141469 kubeadm.go:310] 
	I1212 01:08:03.705672  141469 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:03.705724  141469 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:03.705768  141469 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:03.705800  141469 kubeadm.go:310] 
	I1212 01:08:03.705906  141469 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:03.705918  141469 kubeadm.go:310] 
	I1212 01:08:03.705976  141469 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:03.705987  141469 kubeadm.go:310] 
	I1212 01:08:03.706073  141469 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:03.706191  141469 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:03.706286  141469 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:03.706307  141469 kubeadm.go:310] 
	I1212 01:08:03.706438  141469 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:03.706549  141469 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:03.706556  141469 kubeadm.go:310] 
	I1212 01:08:03.706670  141469 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.706833  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:03.706864  141469 kubeadm.go:310] 	--control-plane 
	I1212 01:08:03.706869  141469 kubeadm.go:310] 
	I1212 01:08:03.706951  141469 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:03.706963  141469 kubeadm.go:310] 
	I1212 01:08:03.707035  141469 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.707134  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:03.708092  141469 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:03.708135  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:08:03.708146  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:03.709765  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:03.711315  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:03.724767  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
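The 496-byte conflist itself is not reproduced in the log. The sketch below writes a generic bridge-plus-portmap chain of the kind such a file typically contains; the subnet and field values are illustrative assumptions, not minikube's exact payload:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Illustrative bridge CNI chain; real deployments pick their own pod subnet.
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "write failed (needs root):", err)
	}
}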
	I1212 01:08:03.749770  141469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:03.749830  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:03.749896  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-607268 minikube.k8s.io/updated_at=2024_12_12T01_08_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=embed-certs-607268 minikube.k8s.io/primary=true
	I1212 01:08:03.973050  141469 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:03.973436  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.094838  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:06.095216  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:04.473952  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.974222  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.473799  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.974261  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.473492  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.974288  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.474064  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.974218  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:08.081567  141469 kubeadm.go:1113] duration metric: took 4.331794716s to wait for elevateKubeSystemPrivileges
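The repeated `kubectl get sa default` calls above are a half-second poll: the default service account only exists once the controller-manager has come up, and minikube waits for it before declaring kube-system privileges elevated. A simple sketch of that wait, assuming kubectl and the kubeconfig path from the log are usable locally:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubeconfig := "/var/lib/minikube/kubeconfig" // path taken from the log; adjust as needed
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// The default service account appears once controller-manager is running.
		err := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig).Run()
		if err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}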
	I1212 01:08:08.081603  141469 kubeadm.go:394] duration metric: took 5m2.502707851s to StartCluster
	I1212 01:08:08.081629  141469 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.081722  141469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:08.083443  141469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.083783  141469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:08.083894  141469 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:08.084015  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:08.084027  141469 addons.go:69] Setting metrics-server=true in profile "embed-certs-607268"
	I1212 01:08:08.084045  141469 addons.go:234] Setting addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:08.084014  141469 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-607268"
	I1212 01:08:08.084054  141469 addons.go:69] Setting default-storageclass=true in profile "embed-certs-607268"
	I1212 01:08:08.084083  141469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-607268"
	I1212 01:08:08.084085  141469 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-607268"
	W1212 01:08:08.084130  141469 addons.go:243] addon storage-provisioner should already be in state true
	W1212 01:08:08.084057  141469 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084618  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084658  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084671  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084684  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084617  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084756  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.085205  141469 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:08.086529  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:08.104090  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I1212 01:08:08.104115  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I1212 01:08:08.104092  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1212 01:08:08.104662  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104701  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104785  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105323  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105329  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105337  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105382  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105696  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105718  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105700  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.106132  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106163  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.106364  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.106599  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106626  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.110390  141469 addons.go:234] Setting addon default-storageclass=true in "embed-certs-607268"
	W1212 01:08:08.110415  141469 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:08.110447  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.110811  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.110844  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.124380  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I1212 01:08:08.124888  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.125447  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.125472  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.125764  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.125966  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.126885  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1212 01:08:08.127417  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.127718  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I1212 01:08:08.127911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.127990  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128002  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.128161  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.128338  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.128541  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.128612  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128626  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.129037  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.129640  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.129678  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.129905  141469 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:08.131337  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:08.131367  141469 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:08.131387  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.131816  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.133335  141469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:08.134372  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.134696  141469 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.134714  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:08.134734  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.134851  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.134868  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.135026  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.135247  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.135405  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.135549  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.137253  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137705  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.137725  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137810  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.137911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.138065  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.138162  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.146888  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I1212 01:08:08.147344  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.147919  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.147937  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.148241  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.148418  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.150018  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.150282  141469 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.150299  141469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:08.150318  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.152881  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153311  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.153327  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.153344  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153509  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.153634  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.153816  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.301991  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:08.323794  141469 node_ready.go:35] waiting up to 6m0s for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338205  141469 node_ready.go:49] node "embed-certs-607268" has status "Ready":"True"
	I1212 01:08:08.338241  141469 node_ready.go:38] duration metric: took 14.401624ms for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338255  141469 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:08.355801  141469 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:08.406624  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:08.406648  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:08.409497  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.456893  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:08.456917  141469 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:08.554996  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.558767  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.558793  141469 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:08.614574  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
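Enabling metrics-server reduces to copying four manifests onto the node and applying them in a single kubectl call, as the command above shows. A local-equivalent sketch (paths and kubeconfig are taken from the log; the sudo-over-SSH indirection minikube uses is omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply", "--kubeconfig", "/var/lib/minikube/kubeconfig"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("apply failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("%s", out)
}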
	I1212 01:08:08.702483  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702513  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.702818  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.702883  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.702894  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.702904  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702912  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.703142  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.703186  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.703163  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.714426  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.714450  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.714840  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.714857  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.821732  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266688284s)
	I1212 01:08:09.821807  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.821824  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822160  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822185  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.822211  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.822225  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822487  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.822518  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822535  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842157  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.227536232s)
	I1212 01:08:09.842222  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842237  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.842627  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.842663  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.842672  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842679  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842687  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.843002  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.843013  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.843028  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.843046  141469 addons.go:475] Verifying addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:09.844532  141469 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:08.098516  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:10.596197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:09.845721  141469 addons.go:510] duration metric: took 1.761839241s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:10.400164  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:12.862616  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:14.362448  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.362473  141469 pod_ready.go:82] duration metric: took 6.006632075s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.362486  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868198  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.868220  141469 pod_ready.go:82] duration metric: took 505.72656ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868231  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872557  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.872582  141469 pod_ready.go:82] duration metric: took 4.343797ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872599  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876837  141469 pod_ready.go:93] pod "kube-proxy-6hw4b" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.876858  141469 pod_ready.go:82] duration metric: took 4.251529ms for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876867  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881467  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.881487  141469 pod_ready.go:82] duration metric: took 4.612567ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881496  141469 pod_ready.go:39] duration metric: took 6.543228562s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
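The pod_ready checks that recur throughout this log re-read each pod until its Ready condition reports True or the budget (here 6m0s) runs out, which is exactly what expired earlier for the metrics-server pods. A minimal client-go sketch of that check, assuming the kubeconfig path and pod name shown above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the log's 6m0s wait
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-607268", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}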
	I1212 01:08:14.881516  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:14.881571  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:14.898899  141469 api_server.go:72] duration metric: took 6.815070313s to wait for apiserver process to appear ...
	I1212 01:08:14.898942  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:14.898963  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:08:14.904555  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:08:14.905738  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:14.905762  141469 api_server.go:131] duration metric: took 6.812513ms to wait for apiserver health ...
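The healthz probe above is a plain HTTPS GET against the apiserver that counts as success once it returns 200/ok. A stripped-down sketch of the same request (certificate verification is skipped here only to keep the example short; minikube authenticates with the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; a real client would present the cluster's client certs.
	url := "https://192.168.50.151:8443/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}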
	I1212 01:08:14.905771  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:14.964381  141469 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:14.964413  141469 system_pods.go:61] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:14.964418  141469 system_pods.go:61] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:14.964422  141469 system_pods.go:61] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:14.964426  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:14.964429  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:14.964432  141469 system_pods.go:61] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:14.964435  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:14.964441  141469 system_pods.go:61] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:14.964447  141469 system_pods.go:61] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:14.964460  141469 system_pods.go:74] duration metric: took 58.68072ms to wait for pod list to return data ...
	I1212 01:08:14.964476  141469 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:15.161106  141469 default_sa.go:45] found service account: "default"
	I1212 01:08:15.161137  141469 default_sa.go:55] duration metric: took 196.651344ms for default service account to be created ...
	I1212 01:08:15.161147  141469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:15.363429  141469 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:15.363457  141469 system_pods.go:89] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:15.363462  141469 system_pods.go:89] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:15.363466  141469 system_pods.go:89] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:15.363470  141469 system_pods.go:89] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:15.363473  141469 system_pods.go:89] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:15.363477  141469 system_pods.go:89] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:15.363480  141469 system_pods.go:89] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:15.363487  141469 system_pods.go:89] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:15.363492  141469 system_pods.go:89] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:15.363501  141469 system_pods.go:126] duration metric: took 202.347796ms to wait for k8s-apps to be running ...
	I1212 01:08:15.363508  141469 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:15.363553  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:15.378498  141469 system_svc.go:56] duration metric: took 14.977368ms WaitForService to wait for kubelet
	I1212 01:08:15.378527  141469 kubeadm.go:582] duration metric: took 7.294704666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:15.378545  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:15.561384  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:15.561408  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:15.561422  141469 node_conditions.go:105] duration metric: took 182.869791ms to run NodePressure ...
	I1212 01:08:15.561435  141469 start.go:241] waiting for startup goroutines ...
	I1212 01:08:15.561442  141469 start.go:246] waiting for cluster config update ...
	I1212 01:08:15.561453  141469 start.go:255] writing updated cluster config ...
	I1212 01:08:15.561693  141469 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:15.615106  141469 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:15.617073  141469 out.go:177] * Done! kubectl is now configured to use "embed-certs-607268" cluster and "default" namespace by default
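Editor's note: at this point the embed-certs-607268 profile is up and minikube has pointed kubectl at it. A quick sanity check (not part of the test run, assuming the kubeconfig written above is the active one) would look like:

    # Illustrative only: verify the context minikube just configured.
    kubectl config current-context                         # expected: embed-certs-607268
    kubectl --context embed-certs-607268 get nodes -o wide
    # The metrics-server pod logged above is still Pending; list kube-system pods to confirm.
    kubectl --context embed-certs-607268 -n kube-system get pods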
	I1212 01:08:14.771660  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.434092304s)
	I1212 01:08:14.771750  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:14.802721  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:08:14.813349  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:08:14.826608  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:08:14.826637  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:08:14.826693  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:08:14.842985  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:08:14.843060  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:08:14.855326  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:08:14.872371  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:08:14.872449  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:08:14.883793  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.894245  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:08:14.894306  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.906163  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:08:14.915821  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:08:14.915867  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:08:14.926019  141884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:08:15.092424  141884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:13.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:15.096259  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:17.596953  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:20.095957  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:22.096970  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:23.562216  141884 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:08:23.562302  141884 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:08:23.562463  141884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:08:23.562655  141884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:08:23.562786  141884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:08:23.562870  141884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:08:23.564412  141884 out.go:235]   - Generating certificates and keys ...
	I1212 01:08:23.564519  141884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:08:23.564605  141884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:08:23.564718  141884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:08:23.564802  141884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:08:23.564879  141884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:08:23.564925  141884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:08:23.565011  141884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:08:23.565110  141884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:08:23.565230  141884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:08:23.565352  141884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:08:23.565393  141884 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:08:23.565439  141884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:08:23.565485  141884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:08:23.565537  141884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:08:23.565582  141884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:08:23.565636  141884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:08:23.565700  141884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:08:23.565786  141884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:08:23.565885  141884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:08:23.567104  141884 out.go:235]   - Booting up control plane ...
	I1212 01:08:23.567195  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:08:23.567267  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:08:23.567353  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:08:23.567472  141884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:08:23.567579  141884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:08:23.567662  141884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:08:23.567812  141884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:08:23.567953  141884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:08:23.568010  141884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001996966s
	I1212 01:08:23.568071  141884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:08:23.568125  141884 kubeadm.go:310] [api-check] The API server is healthy after 5.001946459s
	I1212 01:08:23.568266  141884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:23.568424  141884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:23.568510  141884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:23.568702  141884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-076578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:23.568789  141884 kubeadm.go:310] [bootstrap-token] Using token: 472xql.x3zqihc9l5oj308m
	I1212 01:08:23.570095  141884 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:23.570226  141884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:23.570353  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:23.570550  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:23.570719  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:23.570880  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:23.571006  141884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:23.571186  141884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:23.571245  141884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:23.571322  141884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:23.571333  141884 kubeadm.go:310] 
	I1212 01:08:23.571411  141884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:23.571421  141884 kubeadm.go:310] 
	I1212 01:08:23.571530  141884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:23.571551  141884 kubeadm.go:310] 
	I1212 01:08:23.571609  141884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:23.571711  141884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:23.571795  141884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:23.571808  141884 kubeadm.go:310] 
	I1212 01:08:23.571892  141884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:23.571907  141884 kubeadm.go:310] 
	I1212 01:08:23.571985  141884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:23.571992  141884 kubeadm.go:310] 
	I1212 01:08:23.572069  141884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:23.572184  141884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:23.572276  141884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:23.572286  141884 kubeadm.go:310] 
	I1212 01:08:23.572413  141884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:23.572516  141884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:23.572525  141884 kubeadm.go:310] 
	I1212 01:08:23.572656  141884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.572805  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:23.572847  141884 kubeadm.go:310] 	--control-plane 
	I1212 01:08:23.572856  141884 kubeadm.go:310] 
	I1212 01:08:23.572973  141884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:23.572991  141884 kubeadm.go:310] 
	I1212 01:08:23.573107  141884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.573248  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:23.573273  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:08:23.573283  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:23.574736  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:23.575866  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:23.590133  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
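Editor's note: the 496-byte file written here is minikube's bridge CNI configuration. The sketch below shows the general shape of such a conflist for context only; it is not the exact file this run generated, and values like the pod subnet are assumptions.

    # Sketch of a typical bridge CNI conflist; illustrative, not the file minikube wrote.
    cat <<'EOF' > /tmp/example-bridge.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
    sudo cat /etc/cni/net.d/1-k8s.conflist   # the real file written by the step above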
	I1212 01:08:23.613644  141884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:23.613737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:23.613759  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-076578 minikube.k8s.io/updated_at=2024_12_12T01_08_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=default-k8s-diff-port-076578 minikube.k8s.io/primary=true
	I1212 01:08:23.642646  141884 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:23.831478  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.331749  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.832158  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.331630  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.831737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:26.331787  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.597126  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:27.095607  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:26.831860  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.331748  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.448891  141884 kubeadm.go:1113] duration metric: took 3.835231667s to wait for elevateKubeSystemPrivileges
	I1212 01:08:27.448930  141884 kubeadm.go:394] duration metric: took 5m2.053707834s to StartCluster
	I1212 01:08:27.448957  141884 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.449060  141884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:27.450918  141884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.451183  141884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:27.451263  141884 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:27.451385  141884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451409  141884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451417  141884 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:08:27.451413  141884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451449  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:27.451454  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451465  141884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-076578"
	I1212 01:08:27.451423  141884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451570  141884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451586  141884 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:27.451648  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451876  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451905  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451927  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.451942  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452055  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.452096  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452939  141884 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:27.454521  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:27.467512  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1212 01:08:27.467541  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I1212 01:08:27.467581  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1212 01:08:27.468032  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468069  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468039  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468580  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468592  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468604  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468609  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468620  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468635  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468968  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.469191  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.469562  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469579  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469613  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.469623  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.472898  141884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.472925  141884 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:27.472956  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.473340  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.473389  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.485014  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I1212 01:08:27.485438  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.486058  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.486077  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.486629  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.486832  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.487060  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1212 01:08:27.487779  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.488503  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.488527  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.488910  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.489132  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.489304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.489892  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1212 01:08:27.490599  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.490758  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.491213  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.491236  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.491385  141884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:27.491606  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.492230  141884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:27.492375  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.492420  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.493368  141884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.493382  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:27.493397  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.493462  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:27.493468  141884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:27.493481  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.496807  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497273  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.497304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497474  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.497647  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.497691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497771  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.497922  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.498178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.498190  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.498288  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.498467  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.498634  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.498779  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.512025  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1212 01:08:27.512490  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.513168  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.513187  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.513474  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.513664  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.514930  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.515106  141884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.515119  141884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:27.515131  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.520051  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520084  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.520183  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520419  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.520574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.520737  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.520828  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.692448  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:27.712214  141884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724269  141884 node_ready.go:49] node "default-k8s-diff-port-076578" has status "Ready":"True"
	I1212 01:08:27.724301  141884 node_ready.go:38] duration metric: took 12.044784ms for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724313  141884 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:27.729135  141884 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:27.768566  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:27.768596  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:27.782958  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.797167  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:27.797190  141884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:27.828960  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:27.828983  141884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:27.871251  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.883614  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
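Editor's note: the metrics-server addon applied here is the component that never reaches Ready in this report. Assuming the usual addon layout (a Deployment named metrics-server and an APIService named v1beta1.metrics.k8s.io in kube-system), its state could be inspected with:

    # Illustrative checks, assuming the standard metrics-server object names.
    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system describe deploy metrics-server   # events usually explain a Pending pod
    kubectl get apiservice v1beta1.metrics.k8s.io           # should report Available=True once serving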
	I1212 01:08:28.198044  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198090  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198457  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198510  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198522  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.198532  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198544  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198817  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198815  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198844  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.277379  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.277405  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.277719  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.277741  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955418  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084128053s)
	I1212 01:08:28.955472  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955561  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071904294s)
	I1212 01:08:28.955624  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955646  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955856  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.955874  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955881  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955888  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.957731  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957740  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957748  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957761  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957802  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957814  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957823  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.957836  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.958072  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.958090  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.958100  141884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-076578"
	I1212 01:08:28.959879  141884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:28.961027  141884 addons.go:510] duration metric: took 1.509771178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:29.241061  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:29.241090  141884 pod_ready.go:82] duration metric: took 1.511925292s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:29.241106  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:31.247610  141884 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:29.095906  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:31.593942  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:33.246910  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.246933  141884 pod_ready.go:82] duration metric: took 4.005818542s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.246944  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753325  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.753350  141884 pod_ready.go:82] duration metric: took 506.39921ms for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753360  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758733  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.758759  141884 pod_ready.go:82] duration metric: took 5.391762ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758769  141884 pod_ready.go:39] duration metric: took 6.034446537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:33.758789  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:33.758854  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:33.774952  141884 api_server.go:72] duration metric: took 6.323732468s to wait for apiserver process to appear ...
	I1212 01:08:33.774976  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:33.774995  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:08:33.780463  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:08:33.781364  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:33.781387  141884 api_server.go:131] duration metric: took 6.404187ms to wait for apiserver health ...
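Editor's note: the health wait above simply probes the apiserver's /healthz endpoint on this profile's port (8444). The same check can be reproduced by hand, assuming the default RBAC that exposes /healthz to unauthenticated callers is still in place:

    # Manual equivalent of the healthz wait; -k skips verification of the minikube-generated TLS cert.
    curl -k https://192.168.39.174:8444/healthz    # expected body: ok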
	I1212 01:08:33.781396  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:33.786570  141884 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:33.786591  141884 system_pods.go:61] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.786596  141884 system_pods.go:61] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.786599  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.786603  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.786606  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.786610  141884 system_pods.go:61] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.786615  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.786623  141884 system_pods.go:61] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.786630  141884 system_pods.go:61] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.786643  141884 system_pods.go:74] duration metric: took 5.239236ms to wait for pod list to return data ...
	I1212 01:08:33.786655  141884 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:33.789776  141884 default_sa.go:45] found service account: "default"
	I1212 01:08:33.789794  141884 default_sa.go:55] duration metric: took 3.13371ms for default service account to be created ...
	I1212 01:08:33.789801  141884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:33.794118  141884 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:33.794139  141884 system_pods.go:89] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.794145  141884 system_pods.go:89] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.794149  141884 system_pods.go:89] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.794154  141884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.794157  141884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.794161  141884 system_pods.go:89] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.794165  141884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.794170  141884 system_pods.go:89] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.794177  141884 system_pods.go:89] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.794185  141884 system_pods.go:126] duration metric: took 4.378791ms to wait for k8s-apps to be running ...
	I1212 01:08:33.794194  141884 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:33.794233  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:33.809257  141884 system_svc.go:56] duration metric: took 15.051528ms WaitForService to wait for kubelet
	I1212 01:08:33.809290  141884 kubeadm.go:582] duration metric: took 6.358073584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:33.809323  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:33.813154  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:33.813174  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:33.813183  141884 node_conditions.go:105] duration metric: took 3.85493ms to run NodePressure ...
	I1212 01:08:33.813194  141884 start.go:241] waiting for startup goroutines ...
	I1212 01:08:33.813200  141884 start.go:246] waiting for cluster config update ...
	I1212 01:08:33.813210  141884 start.go:255] writing updated cluster config ...
	I1212 01:08:33.813474  141884 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:33.862511  141884 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:33.864367  141884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-076578" cluster and "default" namespace by default
	I1212 01:08:33.594621  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:34.589133  141411 pod_ready.go:82] duration metric: took 4m0.000384717s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	E1212 01:08:34.589166  141411 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:08:34.589184  141411 pod_ready.go:39] duration metric: took 4m8.190648334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:34.589214  141411 kubeadm.go:597] duration metric: took 4m15.984656847s to restartPrimaryControlPlane
	W1212 01:08:34.589299  141411 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:08:34.589327  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:08:39.900234  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:08:39.900966  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:39.901216  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:44.901739  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:44.901921  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:54.902652  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:54.902877  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
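Editor's note: this cluster (process 142150) is stuck in kubeadm's kubelet-check phase; port 10248 is the kubelet's local healthz endpoint, and the connection-refused errors mean the kubelet never came up. Illustrative first steps on the node itself (not taken from this run) would be:

    # Inspect the kubelet from inside the minikube VM using standard systemd tooling.
    sudo systemctl status kubelet                  # is the unit active or failed?
    sudo journalctl -u kubelet --no-pager -n 100   # recent kubelet logs
    curl -s http://127.0.0.1:10248/healthz         # the same probe kubeadm keeps retrying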
	I1212 01:09:00.919650  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.330292422s)
	I1212 01:09:00.919762  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:00.956649  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:09:00.976311  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:00.999339  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:00.999364  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:00.999413  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:01.013048  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:01.013112  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:01.027407  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:01.036801  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:01.036854  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:01.046865  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.056325  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:01.056390  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.066574  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:01.078080  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:01.078130  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:09:01.088810  141411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:01.249481  141411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:09.318633  141411 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:09:09.318694  141411 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:09:09.318789  141411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:09:09.318924  141411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:09:09.319074  141411 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:09:09.319185  141411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:09:09.320615  141411 out.go:235]   - Generating certificates and keys ...
	I1212 01:09:09.320710  141411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:09:09.320803  141411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:09:09.320886  141411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:09:09.320957  141411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:09:09.321061  141411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:09:09.321118  141411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:09:09.321188  141411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:09:09.321249  141411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:09:09.321334  141411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:09:09.321442  141411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:09:09.321516  141411 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:09:09.321611  141411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:09:09.321698  141411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:09:09.321775  141411 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:09:09.321849  141411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:09:09.321924  141411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:09:09.321973  141411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:09:09.322099  141411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:09:09.322204  141411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:09:09.323661  141411 out.go:235]   - Booting up control plane ...
	I1212 01:09:09.323780  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:09:09.323864  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:09:09.323950  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:09:09.324082  141411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:09:09.324181  141411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:09:09.324255  141411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:09:09.324431  141411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:09:09.324571  141411 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:09:09.324647  141411 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.39943ms
	I1212 01:09:09.324730  141411 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:09:09.324780  141411 kubeadm.go:310] [api-check] The API server is healthy after 5.001520724s
	I1212 01:09:09.324876  141411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:09:09.325036  141411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:09:09.325136  141411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:09:09.325337  141411 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-242725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:09:09.325401  141411 kubeadm.go:310] [bootstrap-token] Using token: k8uf20.0v0t2d7mhtmwxurz
	I1212 01:09:09.326715  141411 out.go:235]   - Configuring RBAC rules ...
	I1212 01:09:09.326840  141411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:09:09.326938  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:09:09.327149  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:09:09.327329  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:09:09.327498  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:09:09.327643  141411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:09:09.327787  141411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:09:09.327852  141411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:09:09.327926  141411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:09:09.327935  141411 kubeadm.go:310] 
	I1212 01:09:09.328027  141411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:09:09.328036  141411 kubeadm.go:310] 
	I1212 01:09:09.328138  141411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:09:09.328148  141411 kubeadm.go:310] 
	I1212 01:09:09.328183  141411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:09:09.328253  141411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:09:09.328302  141411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:09:09.328308  141411 kubeadm.go:310] 
	I1212 01:09:09.328396  141411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:09:09.328413  141411 kubeadm.go:310] 
	I1212 01:09:09.328478  141411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:09:09.328489  141411 kubeadm.go:310] 
	I1212 01:09:09.328554  141411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:09:09.328643  141411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:09:09.328719  141411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:09:09.328727  141411 kubeadm.go:310] 
	I1212 01:09:09.328797  141411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:09:09.328885  141411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:09:09.328894  141411 kubeadm.go:310] 
	I1212 01:09:09.328997  141411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329096  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:09:09.329120  141411 kubeadm.go:310] 	--control-plane 
	I1212 01:09:09.329126  141411 kubeadm.go:310] 
	I1212 01:09:09.329201  141411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:09:09.329209  141411 kubeadm.go:310] 
	I1212 01:09:09.329276  141411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329374  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
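For reference, the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch of that computation follows; the ca.crt path is taken from the certificateDir logged earlier, and the snippet is illustrative rather than minikube's or kubeadm's own code.

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Read the cluster CA certificate (path from the certificateDir in the log).
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(spki)
    	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }

Comparing its output with the sha256: value printed above is a quick way to validate a join command before running it on another node.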
	I1212 01:09:09.329386  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:09:09.329393  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:09:09.330870  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:09:09.332191  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:09:09.345593  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
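The two ssh_runner lines above create /etc/cni/net.d and push a 496-byte 1-k8s.conflist for the bridge CNI. The log does not show the file's contents; the sketch below writes a generic bridge + portmap conflist of the kind such a file typically contains. The subnet and plugin list are assumptions, not the exact bytes minikube ships.

    package main

    import (
    	"os"
    	"path/filepath"
    )

    // A generic bridge CNI config; the real 1-k8s.conflist minikube writes may differ.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	dir := "/etc/cni/net.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors `sudo mkdir -p /etc/cni/net.d`
    		panic(err)
    	}
    	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }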
	I1212 01:09:09.366177  141411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:09:09.366234  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:09.366252  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-242725 minikube.k8s.io/updated_at=2024_12_12T01_09_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=no-preload-242725 minikube.k8s.io/primary=true
	I1212 01:09:09.589709  141411 ops.go:34] apiserver oom_adj: -16
	I1212 01:09:09.589889  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.090703  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.590697  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.090698  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.590027  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.090413  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.590626  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.090322  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.590174  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.090032  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.233581  141411 kubeadm.go:1113] duration metric: took 4.867404479s to wait for elevateKubeSystemPrivileges
	I1212 01:09:14.233636  141411 kubeadm.go:394] duration metric: took 4m55.678870659s to StartCluster
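The repeated `kubectl get sa default` calls above are a poll loop: minikube retries roughly every 500ms until the default ServiceAccount exists (the elevateKubeSystemPrivileges wait timed at kubeadm.go:1113). A local Go equivalent is sketched below, assuming kubectl on PATH and the kubeconfig path from the log; the real code runs the command over SSH.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until the default
    // ServiceAccount exists, roughly mirroring the ~500ms cadence in the log.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("default ServiceAccount is present")
    }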
	I1212 01:09:14.233674  141411 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.233790  141411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:09:14.236087  141411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.236385  141411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:09:14.236460  141411 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:09:14.236567  141411 addons.go:69] Setting storage-provisioner=true in profile "no-preload-242725"
	I1212 01:09:14.236583  141411 addons.go:69] Setting default-storageclass=true in profile "no-preload-242725"
	I1212 01:09:14.236610  141411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-242725"
	I1212 01:09:14.236611  141411 addons.go:69] Setting metrics-server=true in profile "no-preload-242725"
	I1212 01:09:14.236631  141411 addons.go:234] Setting addon metrics-server=true in "no-preload-242725"
	W1212 01:09:14.236646  141411 addons.go:243] addon metrics-server should already be in state true
	I1212 01:09:14.236682  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.236588  141411 addons.go:234] Setting addon storage-provisioner=true in "no-preload-242725"
	I1212 01:09:14.236687  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1212 01:09:14.236712  141411 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:09:14.236838  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.237093  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237141  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237185  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237101  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237227  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237235  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237863  141411 out.go:177] * Verifying Kubernetes components...
	I1212 01:09:14.239284  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:09:14.254182  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I1212 01:09:14.254405  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I1212 01:09:14.254418  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1212 01:09:14.254742  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254857  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254874  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255388  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255415  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255439  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255803  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255814  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255807  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.256218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.256360  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256396  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.256524  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256567  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.259313  141411 addons.go:234] Setting addon default-storageclass=true in "no-preload-242725"
	W1212 01:09:14.259330  141411 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:09:14.259357  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.259575  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.259621  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.273148  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I1212 01:09:14.273601  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.273909  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
	I1212 01:09:14.274174  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274200  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274282  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.274560  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.274785  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274801  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274866  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.275126  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.275280  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.276840  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.277013  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.278945  141411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:09:14.279016  141411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:09:14.903981  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:14.904298  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:14.280219  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:09:14.280239  141411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:09:14.280268  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.280440  141411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.280450  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:09:14.280464  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.281368  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I1212 01:09:14.282054  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.282652  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.282673  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.283314  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.283947  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.283990  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.284230  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284232  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284802  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.284830  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285052  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285088  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.285106  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285247  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285458  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285483  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285619  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285624  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.285761  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285880  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.323872  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I1212 01:09:14.324336  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.324884  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.324906  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.325248  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.325437  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.326991  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.327217  141411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.327237  141411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:09:14.327258  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.330291  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.330895  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.330910  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.330926  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.331062  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.331219  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.331343  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.411182  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:09:14.454298  141411 node_ready.go:35] waiting up to 6m0s for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467328  141411 node_ready.go:49] node "no-preload-242725" has status "Ready":"True"
	I1212 01:09:14.467349  141411 node_ready.go:38] duration metric: took 13.017274ms for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467359  141411 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:14.482865  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:14.557685  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.594366  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.602730  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:09:14.602760  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:09:14.666446  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:09:14.666474  141411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:09:14.746040  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.746075  141411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:09:14.799479  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.862653  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.862688  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863687  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.863706  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.863721  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.863730  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863740  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:14.863988  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.864007  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878604  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.878630  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.878903  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.878944  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878914  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.914665  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320255607s)
	I1212 01:09:15.914726  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.914741  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915158  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.915204  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915219  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:15.915236  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.915249  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915499  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915528  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.106582  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.307047373s)
	I1212 01:09:16.106635  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.106652  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107000  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107020  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107030  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.107037  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107298  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107317  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107328  141411 addons.go:475] Verifying addon metrics-server=true in "no-preload-242725"
	I1212 01:09:16.107305  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:16.108981  141411 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:09:16.110608  141411 addons.go:510] duration metric: took 1.874161814s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:09:16.498983  141411 pod_ready.go:103] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:09:16.989762  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:16.989784  141411 pod_ready.go:82] duration metric: took 2.506893862s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:16.989795  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996560  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:17.996582  141411 pod_ready.go:82] duration metric: took 1.00678165s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996593  141411 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002275  141411 pod_ready.go:93] pod "etcd-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.002294  141411 pod_ready.go:82] duration metric: took 5.694407ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002308  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006942  141411 pod_ready.go:93] pod "kube-apiserver-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.006965  141411 pod_ready.go:82] duration metric: took 4.650802ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006978  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011581  141411 pod_ready.go:93] pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.011621  141411 pod_ready.go:82] duration metric: took 4.634646ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011634  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187112  141411 pod_ready.go:93] pod "kube-proxy-5kc2s" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.187143  141411 pod_ready.go:82] duration metric: took 175.498685ms for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187156  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.586974  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.587003  141411 pod_ready.go:82] duration metric: took 399.836187ms for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.587012  141411 pod_ready.go:39] duration metric: took 4.119642837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:18.587032  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:09:18.587091  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:18.603406  141411 api_server.go:72] duration metric: took 4.366985373s to wait for apiserver process to appear ...
	I1212 01:09:18.603446  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:09:18.603473  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:09:18.609003  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:09:18.609950  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:09:18.609968  141411 api_server.go:131] duration metric: took 6.513408ms to wait for apiserver health ...
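The healthz check above is a plain HTTPS GET against the apiserver that expects a 200 response with body "ok". A standalone sketch, assuming the endpoint from the log; InsecureSkipVerify is a shortcut for illustration only, a real probe should trust the cluster CA instead.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Probe the apiserver healthz endpoint seen in the log.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.61.222:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body)
    }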
	I1212 01:09:18.609976  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:09:18.790460  141411 system_pods.go:59] 9 kube-system pods found
	I1212 01:09:18.790494  141411 system_pods.go:61] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:18.790502  141411 system_pods.go:61] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:18.790507  141411 system_pods.go:61] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:18.790510  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:18.790515  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:18.790520  141411 system_pods.go:61] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:18.790525  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:18.790534  141411 system_pods.go:61] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:18.790540  141411 system_pods.go:61] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:18.790556  141411 system_pods.go:74] duration metric: took 180.570066ms to wait for pod list to return data ...
	I1212 01:09:18.790566  141411 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:09:18.987130  141411 default_sa.go:45] found service account: "default"
	I1212 01:09:18.987172  141411 default_sa.go:55] duration metric: took 196.594497ms for default service account to be created ...
	I1212 01:09:18.987185  141411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:09:19.189233  141411 system_pods.go:86] 9 kube-system pods found
	I1212 01:09:19.189262  141411 system_pods.go:89] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:19.189267  141411 system_pods.go:89] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:19.189271  141411 system_pods.go:89] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:19.189274  141411 system_pods.go:89] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:19.189290  141411 system_pods.go:89] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:19.189294  141411 system_pods.go:89] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:19.189300  141411 system_pods.go:89] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:19.189308  141411 system_pods.go:89] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:19.189318  141411 system_pods.go:89] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:19.189331  141411 system_pods.go:126] duration metric: took 202.137957ms to wait for k8s-apps to be running ...
	I1212 01:09:19.189341  141411 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:09:19.189391  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:19.204241  141411 system_svc.go:56] duration metric: took 14.889522ms WaitForService to wait for kubelet
	I1212 01:09:19.204272  141411 kubeadm.go:582] duration metric: took 4.967858935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:09:19.204289  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:09:19.387735  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:09:19.387760  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:09:19.387768  141411 node_conditions.go:105] duration metric: took 183.47486ms to run NodePressure ...
	I1212 01:09:19.387780  141411 start.go:241] waiting for startup goroutines ...
	I1212 01:09:19.387787  141411 start.go:246] waiting for cluster config update ...
	I1212 01:09:19.387796  141411 start.go:255] writing updated cluster config ...
	I1212 01:09:19.388041  141411 ssh_runner.go:195] Run: rm -f paused
	I1212 01:09:19.437923  141411 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:09:19.439913  141411 out.go:177] * Done! kubectl is now configured to use "no-preload-242725" cluster and "default" namespace by default
	I1212 01:09:54.906484  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:54.906805  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:54.906828  142150 kubeadm.go:310] 
	I1212 01:09:54.906866  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:09:54.906908  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:09:54.906915  142150 kubeadm.go:310] 
	I1212 01:09:54.906944  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:09:54.906974  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:09:54.907087  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:09:54.907106  142150 kubeadm.go:310] 
	I1212 01:09:54.907205  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:09:54.907240  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:09:54.907271  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:09:54.907277  142150 kubeadm.go:310] 
	I1212 01:09:54.907369  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:09:54.907474  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:09:54.907499  142150 kubeadm.go:310] 
	I1212 01:09:54.907659  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:09:54.907749  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:09:54.907815  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:09:54.907920  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:09:54.907937  142150 kubeadm.go:310] 
	I1212 01:09:54.909051  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:54.909171  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:09:54.909277  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:09:54.909442  142150 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 01:09:54.909493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:09:55.377787  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:55.393139  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:55.403640  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:55.403664  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:55.403707  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:55.413315  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:55.413394  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:55.422954  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:55.432010  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:55.432073  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:55.441944  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.451991  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:55.452064  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.461584  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:55.471118  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:55.471191  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
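The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init is retried. A local Go sketch of the same check follows; the real code runs grep and rm over SSH with sudo.

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
    	for _, f := range files {
    		path := filepath.Join("/etc/kubernetes", f)
    		data, err := os.ReadFile(path)
    		// A missing file, or one that does not mention the expected endpoint,
    		// is treated as stale and removed, mirroring the grep/rm pairs in the log.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
    				panic(rmErr)
    			}
    			fmt.Printf("removed stale %s\n", path)
    			continue
    		}
    		fmt.Printf("kept %s\n", path)
    	}
    }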
	I1212 01:09:55.480829  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:55.713359  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:11:51.592618  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:11:51.592716  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:11:51.594538  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:11:51.594601  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:11:51.594684  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:11:51.594835  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:11:51.594954  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:11:51.595052  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:11:51.597008  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:11:51.597118  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:11:51.597173  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:11:51.597241  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:11:51.597297  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:11:51.597359  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:11:51.597427  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:11:51.597508  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:11:51.597585  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:11:51.597681  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:11:51.597766  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:11:51.597804  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:11:51.597869  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:11:51.597941  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:11:51.598021  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:11:51.598119  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:11:51.598207  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:11:51.598320  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:11:51.598427  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:11:51.598485  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:11:51.598577  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:11:51.599918  142150 out.go:235]   - Booting up control plane ...
	I1212 01:11:51.600024  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:11:51.600148  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:11:51.600229  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:11:51.600341  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:11:51.600507  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:11:51.600572  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:11:51.600672  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.600878  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.600992  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601222  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601285  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601456  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601515  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601702  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601804  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.602020  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.602033  142150 kubeadm.go:310] 
	I1212 01:11:51.602093  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:11:51.602153  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:11:51.602163  142150 kubeadm.go:310] 
	I1212 01:11:51.602211  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:11:51.602274  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:11:51.602393  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:11:51.602416  142150 kubeadm.go:310] 
	I1212 01:11:51.602561  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:11:51.602618  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:11:51.602651  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:11:51.602661  142150 kubeadm.go:310] 
	I1212 01:11:51.602794  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:11:51.602919  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:11:51.602928  142150 kubeadm.go:310] 
	I1212 01:11:51.603023  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:11:51.603110  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:11:51.603176  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:11:51.603237  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:11:51.603252  142150 kubeadm.go:310] 
	I1212 01:11:51.603327  142150 kubeadm.go:394] duration metric: took 8m2.544704165s to StartCluster
	I1212 01:11:51.603376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:51.603447  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:51.648444  142150 cri.go:89] found id: ""
	I1212 01:11:51.648488  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.648501  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:11:51.648509  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:11:51.648573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:51.687312  142150 cri.go:89] found id: ""
	I1212 01:11:51.687341  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.687354  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:11:51.687362  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:11:51.687419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:51.726451  142150 cri.go:89] found id: ""
	I1212 01:11:51.726505  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.726521  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:11:51.726529  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:51.726594  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:51.763077  142150 cri.go:89] found id: ""
	I1212 01:11:51.763112  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.763125  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:11:51.763132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:51.763194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:51.801102  142150 cri.go:89] found id: ""
	I1212 01:11:51.801139  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.801152  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:51.801160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:51.801220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:51.838249  142150 cri.go:89] found id: ""
	I1212 01:11:51.838275  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.838283  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:11:51.838290  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:51.838357  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:51.874958  142150 cri.go:89] found id: ""
	I1212 01:11:51.874989  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.874997  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:51.875007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:11:51.875106  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:11:51.911408  142150 cri.go:89] found id: ""
	I1212 01:11:51.911440  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.911451  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:11:51.911465  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:51.911483  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:51.997485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:51.997516  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:11:51.997532  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:11:52.119827  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:11:52.119869  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:52.162270  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:52.162298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:52.215766  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:52.215805  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 01:11:52.231106  142150 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:11:52.231187  142150 out.go:270] * 
	W1212 01:11:52.231316  142150 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.231351  142150 out.go:270] * 
	W1212 01:11:52.232281  142150 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:11:52.235692  142150 out.go:201] 
	W1212 01:11:52.236852  142150 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.236890  142150 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:11:52.236910  142150 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:11:52.238333  142150 out.go:201] 
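	For reference, the troubleshooting steps quoted in the kubeadm output above, together with minikube's suggestion, can be collected into one short shell sketch. This is only a convenience rewrite of commands already present in the log; the <PROFILE> and CONTAINERID placeholders are not taken from this report and would need to be substituted for a real run.
	
	#!/usr/bin/env bash
	# Sketch assembled from the commands quoted in the kubeadm/minikube output above.
	set -x
	
	# Is the kubelet running, and what is it logging?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	
	# The health endpoint kubeadm polls (refused with "connection refused" in the log above).
	curl -sSL http://localhost:10248/healthz
	
	# List Kubernetes containers via CRI-O, then inspect a failing one.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID is a placeholder
	
	# Retry with the cgroup driver minikube suggests; <PROFILE> is a placeholder, not a value from this report.
	# minikube start -p <PROFILE> --extra-config=kubelet.cgroup-driver=systemd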
	
	
	==> CRI-O <==
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.613690714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966237613665793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f0c02d02-f247-4d35-a8fc-3db31610a8e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.614339671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eab8fad8-9112-4890-a462-b74f032c65f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.614407014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eab8fad8-9112-4890-a462-b74f032c65f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.614622833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b,PodSandboxId:7bd5c6035c545d5cccefe7a23c8bb59095348e7bde2c3312f9073a7a2291b45f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965690355000560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2421890-0e6b-4d0b-8967-6f0103e90996,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480,PodSandboxId:3152ac5313cbeb1a341e22412cee647680766467bbe4e817b73312ae41ee9e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689879019165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m7b7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02e714b4-3e8d-4c9d-90e3-6fba636190fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f,PodSandboxId:f1872f87960f17a9169aac0cee98fe1a8176b117c54c97ee53a1fe3623bcc7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689718052285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m27d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
420ab7f-7518-41da-a83f-8339380f5bff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b,PodSandboxId:cd10ed771a7e456c23ae09f355ab3afbb8f4f38f68f2641a3be625fad9289629,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733965688916169698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hw4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ae27b6f-a174-42eb-96a7-2e94f0f916c1,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253,PodSandboxId:e337ffcde9aa9e106ffaa87bf55f781424a5ed353fd4f336f3f6751ca8e42a31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965677383463403,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c62310e16a862118a6bb643a830190,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be,PodSandboxId:b2343493f8f8cf4ad4f12a631acd93c417e7116493d6e6c09837f0120170c88a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965677310706185,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8815a42f8d3f4d605ba1f04e219c7be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a,PodSandboxId:d38a210a7a58ed877c1d362dd5b9b1ece116e85ead2b4e386a551978c741a34c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965677256582083,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54ab2f74aadec385c2404a16c8ac8ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63,PodSandboxId:d5a5e4fb984dc5fb0d14b4c23de214365ada47091c23505e952e736e9dfc6090,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965677201235624,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a,PodSandboxId:5fc634b841058120e7ee64dfe672aec079f49ac69b6aa14f5a9ecf8a51bc7103,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965387912524363,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eab8fad8-9112-4890-a462-b74f032c65f1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.660793319Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6345cc63-158d-4f38-a647-ad4f349e4753 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.660870557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6345cc63-158d-4f38-a647-ad4f349e4753 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.662341419Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf1757b4-9c53-4bb4-84d8-30612521372f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.662741700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966237662721119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf1757b4-9c53-4bb4-84d8-30612521372f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.663613401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=514d4672-e807-41dd-b352-3ff5e541d4e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.663665618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=514d4672-e807-41dd-b352-3ff5e541d4e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.663868645Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b,PodSandboxId:7bd5c6035c545d5cccefe7a23c8bb59095348e7bde2c3312f9073a7a2291b45f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965690355000560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2421890-0e6b-4d0b-8967-6f0103e90996,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480,PodSandboxId:3152ac5313cbeb1a341e22412cee647680766467bbe4e817b73312ae41ee9e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689879019165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m7b7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02e714b4-3e8d-4c9d-90e3-6fba636190fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f,PodSandboxId:f1872f87960f17a9169aac0cee98fe1a8176b117c54c97ee53a1fe3623bcc7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689718052285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m27d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
420ab7f-7518-41da-a83f-8339380f5bff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b,PodSandboxId:cd10ed771a7e456c23ae09f355ab3afbb8f4f38f68f2641a3be625fad9289629,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733965688916169698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hw4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ae27b6f-a174-42eb-96a7-2e94f0f916c1,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253,PodSandboxId:e337ffcde9aa9e106ffaa87bf55f781424a5ed353fd4f336f3f6751ca8e42a31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965677383463403,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c62310e16a862118a6bb643a830190,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be,PodSandboxId:b2343493f8f8cf4ad4f12a631acd93c417e7116493d6e6c09837f0120170c88a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965677310706185,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8815a42f8d3f4d605ba1f04e219c7be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a,PodSandboxId:d38a210a7a58ed877c1d362dd5b9b1ece116e85ead2b4e386a551978c741a34c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965677256582083,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54ab2f74aadec385c2404a16c8ac8ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63,PodSandboxId:d5a5e4fb984dc5fb0d14b4c23de214365ada47091c23505e952e736e9dfc6090,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965677201235624,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a,PodSandboxId:5fc634b841058120e7ee64dfe672aec079f49ac69b6aa14f5a9ecf8a51bc7103,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965387912524363,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=514d4672-e807-41dd-b352-3ff5e541d4e5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.703460033Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b229b1ae-44c3-40d6-a1e8-fe744786bff1 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.703550672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b229b1ae-44c3-40d6-a1e8-fe744786bff1 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.704873181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=531360b6-fb78-4d7d-b956-e7fba03e40e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.705317114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966237705294542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=531360b6-fb78-4d7d-b956-e7fba03e40e2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.706084566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cfc2ff93-03fb-43e3-8c58-5834ce97d82c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.706158709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cfc2ff93-03fb-43e3-8c58-5834ce97d82c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.706385711Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b,PodSandboxId:7bd5c6035c545d5cccefe7a23c8bb59095348e7bde2c3312f9073a7a2291b45f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965690355000560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2421890-0e6b-4d0b-8967-6f0103e90996,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480,PodSandboxId:3152ac5313cbeb1a341e22412cee647680766467bbe4e817b73312ae41ee9e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689879019165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m7b7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02e714b4-3e8d-4c9d-90e3-6fba636190fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f,PodSandboxId:f1872f87960f17a9169aac0cee98fe1a8176b117c54c97ee53a1fe3623bcc7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689718052285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m27d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
420ab7f-7518-41da-a83f-8339380f5bff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b,PodSandboxId:cd10ed771a7e456c23ae09f355ab3afbb8f4f38f68f2641a3be625fad9289629,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733965688916169698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hw4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ae27b6f-a174-42eb-96a7-2e94f0f916c1,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253,PodSandboxId:e337ffcde9aa9e106ffaa87bf55f781424a5ed353fd4f336f3f6751ca8e42a31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965677383463403,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c62310e16a862118a6bb643a830190,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be,PodSandboxId:b2343493f8f8cf4ad4f12a631acd93c417e7116493d6e6c09837f0120170c88a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965677310706185,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8815a42f8d3f4d605ba1f04e219c7be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a,PodSandboxId:d38a210a7a58ed877c1d362dd5b9b1ece116e85ead2b4e386a551978c741a34c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965677256582083,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54ab2f74aadec385c2404a16c8ac8ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63,PodSandboxId:d5a5e4fb984dc5fb0d14b4c23de214365ada47091c23505e952e736e9dfc6090,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965677201235624,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a,PodSandboxId:5fc634b841058120e7ee64dfe672aec079f49ac69b6aa14f5a9ecf8a51bc7103,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965387912524363,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cfc2ff93-03fb-43e3-8c58-5834ce97d82c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.741714895Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9397938-236d-43be-8cad-c0bcd8b2eac3 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.741840173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9397938-236d-43be-8cad-c0bcd8b2eac3 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.743475096Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5547c11-907d-411d-a20b-5c319ef99f21 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.744167126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966237744133264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5547c11-907d-411d-a20b-5c319ef99f21 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.745583534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f20ff3b-c9d5-4d58-a89e-59ec6dd6de06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.745740897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f20ff3b-c9d5-4d58-a89e-59ec6dd6de06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:17 embed-certs-607268 crio[724]: time="2024-12-12 01:17:17.746282025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b,PodSandboxId:7bd5c6035c545d5cccefe7a23c8bb59095348e7bde2c3312f9073a7a2291b45f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965690355000560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2421890-0e6b-4d0b-8967-6f0103e90996,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480,PodSandboxId:3152ac5313cbeb1a341e22412cee647680766467bbe4e817b73312ae41ee9e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689879019165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m7b7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02e714b4-3e8d-4c9d-90e3-6fba636190fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f,PodSandboxId:f1872f87960f17a9169aac0cee98fe1a8176b117c54c97ee53a1fe3623bcc7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689718052285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m27d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
420ab7f-7518-41da-a83f-8339380f5bff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b,PodSandboxId:cd10ed771a7e456c23ae09f355ab3afbb8f4f38f68f2641a3be625fad9289629,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733965688916169698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hw4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ae27b6f-a174-42eb-96a7-2e94f0f916c1,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253,PodSandboxId:e337ffcde9aa9e106ffaa87bf55f781424a5ed353fd4f336f3f6751ca8e42a31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965677383463403,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c62310e16a862118a6bb643a830190,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be,PodSandboxId:b2343493f8f8cf4ad4f12a631acd93c417e7116493d6e6c09837f0120170c88a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965677310706185,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8815a42f8d3f4d605ba1f04e219c7be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a,PodSandboxId:d38a210a7a58ed877c1d362dd5b9b1ece116e85ead2b4e386a551978c741a34c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965677256582083,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54ab2f74aadec385c2404a16c8ac8ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63,PodSandboxId:d5a5e4fb984dc5fb0d14b4c23de214365ada47091c23505e952e736e9dfc6090,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965677201235624,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a,PodSandboxId:5fc634b841058120e7ee64dfe672aec079f49ac69b6aa14f5a9ecf8a51bc7103,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965387912524363,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f20ff3b-c9d5-4d58-a89e-59ec6dd6de06 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df0b9bba2d3dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   7bd5c6035c545       storage-provisioner
	8dbe54cf32496       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   3152ac5313cbe       coredns-7c65d6cfc9-m7b7f
	a661613ef780a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   f1872f87960f1       coredns-7c65d6cfc9-m27d6
	209fac6c58bc9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   cd10ed771a7e4       kube-proxy-6hw4b
	850c440c91002       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   e337ffcde9aa9       etcd-embed-certs-607268
	cae3fe867e45c       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   b2343493f8f8c       kube-scheduler-embed-certs-607268
	70fdf1afdd3c3       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   d38a210a7a58e       kube-controller-manager-embed-certs-607268
	f6066cda2bc1b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   d5a5e4fb984dc       kube-apiserver-embed-certs-607268
	a83f1d614dc5f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   5fc634b841058       kube-apiserver-embed-certs-607268
	
	
	==> coredns [8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-607268
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-607268
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=embed-certs-607268
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_12T01_08_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 01:08:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-607268
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 01:17:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 01:13:18 +0000   Thu, 12 Dec 2024 01:07:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 01:13:18 +0000   Thu, 12 Dec 2024 01:07:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 01:13:18 +0000   Thu, 12 Dec 2024 01:07:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 01:13:18 +0000   Thu, 12 Dec 2024 01:08:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.151
	  Hostname:    embed-certs-607268
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 082bfb4f2b144015b1981937ac6a2f95
	  System UUID:                082bfb4f-2b14-4015-b198-1937ac6a2f95
	  Boot ID:                    c66ba1f4-be69-4247-abed-b8d00f3658f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-m27d6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-m7b7f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-embed-certs-607268                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-embed-certs-607268             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-embed-certs-607268    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-6hw4b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-embed-certs-607268             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-glcnv               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node embed-certs-607268 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node embed-certs-607268 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node embed-certs-607268 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m15s (x2 over 9m15s)  kubelet          Node embed-certs-607268 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s (x2 over 9m15s)  kubelet          Node embed-certs-607268 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s (x2 over 9m15s)  kubelet          Node embed-certs-607268 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m11s                  node-controller  Node embed-certs-607268 event: Registered Node embed-certs-607268 in Controller
	
	
	==> dmesg <==
	[  +0.052754] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040937] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.915455] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.755035] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635579] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.378417] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.056286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064501] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[Dec12 01:03] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.176305] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.310752] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.281092] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.061656] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.925583] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +4.576462] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.631969] kauditd_printk_skb: 85 callbacks suppressed
	[Dec12 01:07] systemd-fstab-generator[2603]: Ignoring "noauto" option for root device
	[  +0.069949] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 01:08] systemd-fstab-generator[2920]: Ignoring "noauto" option for root device
	[  +0.069077] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.348118] systemd-fstab-generator[3052]: Ignoring "noauto" option for root device
	[  +0.109448] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.313533] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253] <==
	{"level":"info","ts":"2024-12-12T01:07:58.171966Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-12T01:07:58.172607Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"bb1641fc01920074","initial-advertise-peer-urls":["https://192.168.50.151:2380"],"listen-peer-urls":["https://192.168.50.151:2380"],"advertise-client-urls":["https://192.168.50.151:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.151:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-12T01:07:58.172706Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-12T01:07:58.172897Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.151:2380"}
	{"level":"info","ts":"2024-12-12T01:07:58.174289Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.151:2380"}
	{"level":"info","ts":"2024-12-12T01:07:58.992281Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb1641fc01920074 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-12T01:07:58.992385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb1641fc01920074 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-12T01:07:58.992440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb1641fc01920074 received MsgPreVoteResp from bb1641fc01920074 at term 1"}
	{"level":"info","ts":"2024-12-12T01:07:58.992482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb1641fc01920074 became candidate at term 2"}
	{"level":"info","ts":"2024-12-12T01:07:58.992507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb1641fc01920074 received MsgVoteResp from bb1641fc01920074 at term 2"}
	{"level":"info","ts":"2024-12-12T01:07:58.992533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb1641fc01920074 became leader at term 2"}
	{"level":"info","ts":"2024-12-12T01:07:58.992558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bb1641fc01920074 elected leader bb1641fc01920074 at term 2"}
	{"level":"info","ts":"2024-12-12T01:07:58.994178Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"bb1641fc01920074","local-member-attributes":"{Name:embed-certs-607268 ClientURLs:[https://192.168.50.151:2379]}","request-path":"/0/members/bb1641fc01920074/attributes","cluster-id":"336dbfe96cdae58d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-12T01:07:58.995021Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:07:58.995256Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:07:58.995455Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:07:58.997398Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:07:58.999132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.151:2379"}
	{"level":"info","ts":"2024-12-12T01:07:59.001015Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-12T01:07:59.001051Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-12T01:07:59.001704Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:07:59.002642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-12T01:07:59.003018Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"336dbfe96cdae58d","local-member-id":"bb1641fc01920074","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:07:59.003104Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:07:59.003142Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 01:17:18 up 14 min,  0 users,  load average: 0.12, 0.15, 0.17
	Linux embed-certs-607268 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a] <==
	W1212 01:07:53.422465       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.433194       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.445855       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.554614       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.630654       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.711240       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.752077       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.767118       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.839788       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.853441       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.898160       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.947202       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.957283       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.051074       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.073197       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.090568       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.107465       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.136333       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.276206       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.314174       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.314410       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.329108       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.370190       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.403536       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.441668       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63] <==
	E1212 01:13:01.518520       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1212 01:13:01.518634       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:13:01.519780       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:13:01.519826       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:14:01.520576       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:14:01.520671       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1212 01:14:01.520876       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:14:01.521111       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:14:01.521946       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:14:01.523107       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:16:01.523135       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:16:01.523286       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1212 01:16:01.523421       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:16:01.523521       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:16:01.525218       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:16:01.526742       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a] <==
	E1212 01:12:07.413186       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:12:07.940871       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:12:37.419440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:12:37.951675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:13:07.426725       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:13:07.959738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:13:18.856431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-607268"
	E1212 01:13:37.432711       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:13:37.967465       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:13:59.064266       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="227.374µs"
	E1212 01:14:07.440362       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:14:07.975369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:14:12.056003       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="242.96µs"
	E1212 01:14:37.447893       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:14:37.983743       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:15:07.457364       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:15:07.994286       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:15:37.464445       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:15:38.002766       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:16:07.472261       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:16:08.012101       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:16:37.479247       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:16:38.020755       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:17:07.486211       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:17:08.028375       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1212 01:08:09.396588       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1212 01:08:09.495958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.151"]
	E1212 01:08:09.496057       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:08:09.594651       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 01:08:09.601028       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:08:09.601095       1 server_linux.go:169] "Using iptables Proxier"
	I1212 01:08:09.605463       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:08:09.605710       1 server.go:483] "Version info" version="v1.31.2"
	I1212 01:08:09.605743       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:08:09.607328       1 config.go:199] "Starting service config controller"
	I1212 01:08:09.607369       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1212 01:08:09.607396       1 config.go:105] "Starting endpoint slice config controller"
	I1212 01:08:09.607400       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1212 01:08:09.607792       1 config.go:328] "Starting node config controller"
	I1212 01:08:09.607827       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1212 01:08:09.708416       1 shared_informer.go:320] Caches are synced for service config
	I1212 01:08:09.708471       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1212 01:08:09.708276       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be] <==
	W1212 01:08:00.552575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:00.553361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:00.553398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 01:08:00.553425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:00.552526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 01:08:00.553473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.395647       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 01:08:01.395743       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1212 01:08:01.422540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 01:08:01.422639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.459712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 01:08:01.459771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.493498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 01:08:01.493584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.494644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:01.494697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.547126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 01:08:01.547205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.595649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:01.595943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.667362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 01:08:01.667429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.730797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 01:08:01.730852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1212 01:08:04.044056       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 01:16:03 embed-certs-607268 kubelet[2927]: E1212 01:16:03.194411    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966163194107206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:13 embed-certs-607268 kubelet[2927]: E1212 01:16:13.196042    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966173195532922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:13 embed-certs-607268 kubelet[2927]: E1212 01:16:13.196418    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966173195532922,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:16 embed-certs-607268 kubelet[2927]: E1212 01:16:16.040143    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:16:23 embed-certs-607268 kubelet[2927]: E1212 01:16:23.197855    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966183197412212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:23 embed-certs-607268 kubelet[2927]: E1212 01:16:23.197901    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966183197412212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:27 embed-certs-607268 kubelet[2927]: E1212 01:16:27.040554    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:16:33 embed-certs-607268 kubelet[2927]: E1212 01:16:33.200702    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966193200043671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:33 embed-certs-607268 kubelet[2927]: E1212 01:16:33.200979    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966193200043671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:42 embed-certs-607268 kubelet[2927]: E1212 01:16:42.039596    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:16:43 embed-certs-607268 kubelet[2927]: E1212 01:16:43.203028    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966203202250794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:43 embed-certs-607268 kubelet[2927]: E1212 01:16:43.203087    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966203202250794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:53 embed-certs-607268 kubelet[2927]: E1212 01:16:53.205132    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966213204631407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:53 embed-certs-607268 kubelet[2927]: E1212 01:16:53.205571    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966213204631407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:57 embed-certs-607268 kubelet[2927]: E1212 01:16:57.041110    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:17:03 embed-certs-607268 kubelet[2927]: E1212 01:17:03.083835    2927 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 01:17:03 embed-certs-607268 kubelet[2927]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 01:17:03 embed-certs-607268 kubelet[2927]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 01:17:03 embed-certs-607268 kubelet[2927]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 01:17:03 embed-certs-607268 kubelet[2927]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 01:17:03 embed-certs-607268 kubelet[2927]: E1212 01:17:03.207442    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966223206857545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:03 embed-certs-607268 kubelet[2927]: E1212 01:17:03.207469    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966223206857545,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:09 embed-certs-607268 kubelet[2927]: E1212 01:17:09.039484    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:17:13 embed-certs-607268 kubelet[2927]: E1212 01:17:13.210059    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966233209021021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:13 embed-certs-607268 kubelet[2927]: E1212 01:17:13.210107    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966233209021021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
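Three recurring kubelet errors are visible above, none of them surprising in this run. The metrics-server ImagePullBackOff is expected: the Audit table later in this report shows the addon was enabled with --registries=MetricsServer=fake.domain, so the pull of fake.domain/registry.k8s.io/echoserver:1.4 can never succeed. The eviction-manager "missing image stats" errors line up with CRI-O returning an ImageFsInfoResponse whose ContainerFilesystems list is empty, and the ip6tables canary failure only says the guest kernel has no ip6table_nat support loaded. Hedged spot checks from the test host (profile and deployment names assumed from the logs; minikube binary path as used elsewhere in this report):

    kubectl --context embed-certs-607268 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    out/minikube-linux-amd64 -p embed-certs-607268 ssh "sudo crictl imagefsinfo"
    out/minikube-linux-amd64 -p embed-certs-607268 ssh "lsmod | grep ip6table_nat"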
	
	
	==> storage-provisioner [df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b] <==
	I1212 01:08:10.446880       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 01:08:10.459601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 01:08:10.460218       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 01:08:10.475858       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 01:08:10.476062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-607268_176bbe4b-7797-4d5d-8558-62057adab84e!
	I1212 01:08:10.478820       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9c38675b-8920-41c0-a3b3-8c11ef2dcf86", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-607268_176bbe4b-7797-4d5d-8558-62057adab84e became leader
	I1212 01:08:10.576625       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-607268_176bbe4b-7797-4d5d-8558-62057adab84e!
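The storage-provisioner itself came up cleanly: it acquired the kube-system/k8s.io-minikube-hostpath leader lease and started its controller. If leadership ever needs to be verified, the lock is an Endpoints object, and the holder identity is normally recorded in its control-plane.alpha.kubernetes.io/leader annotation (a sketch, assuming the same context as above):

    kubectl --context embed-certs-607268 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml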
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-607268 -n embed-certs-607268
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-607268 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-glcnv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-607268 describe pod metrics-server-6867b74b74-glcnv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-607268 describe pod metrics-server-6867b74b74-glcnv: exit status 1 (77.846376ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-glcnv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-607268 describe pod metrics-server-6867b74b74-glcnv: exit status 1
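The NotFound above is most likely a namespace mismatch rather than a vanished pod: the kubelet log earlier in this post-mortem still references kube-system/metrics-server-6867b74b74-glcnv, while the describe was run without -n and therefore looked in the default namespace. A re-check with the namespace made explicit (same context and pod name as above):

    kubectl --context embed-certs-607268 -n kube-system describe pod metrics-server-6867b74b74-glcnv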
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 01:09:18.771395   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-12 01:17:34.416714212 +0000 UTC m=+6252.508381821
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
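In short, no pod matching k8s-app=kubernetes-dashboard ever appeared in the kubernetes-dashboard namespace within the 9m0s window. Two hedged follow-up queries against the same profile that would show whether the dashboard workload exists at all and what its recent events report:

    kubectl --context default-k8s-diff-port-076578 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context default-k8s-diff-port-076578 -n kubernetes-dashboard get events --sort-by=.lastTimestamp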
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-076578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-076578 logs -n 25: (2.037449384s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-000053 -- sudo                         | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-000053                                 | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-459384                           | kubernetes-upgrade-459384    | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:54 UTC |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-242725             | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	| addons  | enable metrics-server -p embed-certs-607268            | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-535684 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | disable-driver-mounts-535684                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:56 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-076578  | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC | 12 Dec 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
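	The Audit table is worth reading next to the failures above: metrics-server is deliberately enabled with --registries=MetricsServer=fake.domain on each profile, which is what produces the ImagePullBackOff seen in the kubelet logs, and the stop and enable dashboard entries for no-preload-242725, embed-certs-607268 and default-k8s-diff-port-076578 have no End Time recorded, i.e. they never finished. The exact override, copied from the table (binary path assumed to match the rest of this report):

	    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-607268 \
	      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	      --registries=MetricsServer=fake.domain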
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:59:45
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
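	A worked decode of the first entry below against that format: "I1212 00:59:45.233578  142150 out.go:345" reads as severity I (info), December 12, time 00:59:45.233578, threadid 142150, logged from out.go line 345.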
	I1212 00:59:45.233578  142150 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:59:45.233778  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.233807  142150 out.go:358] Setting ErrFile to fd 2...
	I1212 00:59:45.233824  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.234389  142150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:59:45.235053  142150 out.go:352] Setting JSON to false
	I1212 00:59:45.235948  142150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13327,"bootTime":1733951858,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:59:45.236050  142150 start.go:139] virtualization: kvm guest
	I1212 00:59:45.238284  142150 out.go:177] * [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:59:45.239634  142150 notify.go:220] Checking for updates...
	I1212 00:59:45.239643  142150 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:59:45.240927  142150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:59:45.242159  142150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:59:45.243348  142150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:59:45.244426  142150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:59:45.245620  142150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:59:45.247054  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:59:45.247412  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.247475  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.262410  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1212 00:59:45.262838  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.263420  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.263444  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.263773  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.263944  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.265490  142150 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1212 00:59:45.266656  142150 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:59:45.266925  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.266959  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.281207  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I1212 00:59:45.281596  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.281963  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.281991  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.282333  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.282519  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.316543  142150 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:59:45.317740  142150 start.go:297] selected driver: kvm2
	I1212 00:59:45.317754  142150 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.317960  142150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:59:45.318921  142150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.319030  142150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:59:45.334276  142150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:59:45.334744  142150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:59:45.334784  142150 cni.go:84] Creating CNI manager for ""
	I1212 00:59:45.334845  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:59:45.334901  142150 start.go:340] cluster config:
	{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.335060  142150 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.336873  142150 out.go:177] * Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	I1212 00:59:42.763823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:45.338030  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:59:45.338076  142150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:59:45.338087  142150 cache.go:56] Caching tarball of preloaded images
	I1212 00:59:45.338174  142150 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:59:45.338188  142150 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:59:45.338309  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:59:45.338520  142150 start.go:360] acquireMachinesLock for old-k8s-version-738445: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
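	The preload handling above shows minikube reusing a cached tarball for v1.20.0 on CRI-O instead of downloading it again. To confirm the cached artifact on the agent (path copied verbatim from the log lines above):

	    ls -lh /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4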
	I1212 00:59:48.839858  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:51.911930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:57.991816  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:01.063931  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:07.143823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:10.215896  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:16.295837  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:19.367812  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:25.447920  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:28.519965  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:34.599875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:37.671930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:43.751927  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:46.823861  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:52.903942  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:55.975967  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:02.055889  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:05.127830  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:11.207862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:14.279940  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:20.359871  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:23.431885  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:29.511831  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:32.583875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:38.663880  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:41.735869  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:47.815810  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:50.887937  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:56.967886  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:00.039916  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:06.119870  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:09.191917  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:15.271841  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:18.343881  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:24.423844  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:27.495936  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:33.575851  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:36.647862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:39.652816  141469 start.go:364] duration metric: took 4m35.142362604s to acquireMachinesLock for "embed-certs-607268"
	I1212 01:02:39.652891  141469 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:39.652902  141469 fix.go:54] fixHost starting: 
	I1212 01:02:39.653292  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:39.653345  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:39.668953  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1212 01:02:39.669389  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:39.669880  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:02:39.669906  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:39.670267  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:39.670428  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:39.670550  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:02:39.671952  141469 fix.go:112] recreateIfNeeded on embed-certs-607268: state=Stopped err=<nil>
	I1212 01:02:39.671994  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	W1212 01:02:39.672154  141469 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:39.677119  141469 out.go:177] * Restarting existing kvm2 VM for "embed-certs-607268" ...
	I1212 01:02:39.650358  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:39.650413  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650700  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:02:39.650731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650949  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:02:39.652672  141411 machine.go:96] duration metric: took 4m37.426998938s to provisionDockerMachine
	I1212 01:02:39.652723  141411 fix.go:56] duration metric: took 4m37.447585389s for fixHost
	I1212 01:02:39.652731  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 4m37.447868317s
	W1212 01:02:39.652756  141411 start.go:714] error starting host: provision: host is not running
	W1212 01:02:39.652909  141411 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1212 01:02:39.652919  141411 start.go:729] Will try again in 5 seconds ...
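	Interleaved with that start, the log stream tagged 141411 (the no-preload-242725 restart) spent from roughly 00:59:42 to 01:02:39 failing to reach 192.168.61.222:22 with "no route to host", then gave up provisioning ("host is not running") and scheduled a retry. Hedged checks on the libvirt side (virsh assumed available on the agent since the kvm2 driver is in use, and the domain assumed to carry the profile name, as embed-certs-607268 does below):

	    virsh -c qemu:///system domstate no-preload-242725
	    virsh -c qemu:///system domifaddr no-preload-242725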
	I1212 01:02:39.682230  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Start
	I1212 01:02:39.682424  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring networks are active...
	I1212 01:02:39.683293  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network default is active
	I1212 01:02:39.683713  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network mk-embed-certs-607268 is active
	I1212 01:02:39.684046  141469 main.go:141] libmachine: (embed-certs-607268) Getting domain xml...
	I1212 01:02:39.684631  141469 main.go:141] libmachine: (embed-certs-607268) Creating domain...
	I1212 01:02:40.886712  141469 main.go:141] libmachine: (embed-certs-607268) Waiting to get IP...
	I1212 01:02:40.887814  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:40.888208  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:40.888304  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:40.888203  142772 retry.go:31] will retry after 273.835058ms: waiting for machine to come up
	I1212 01:02:41.164102  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.164574  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.164604  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.164545  142772 retry.go:31] will retry after 260.789248ms: waiting for machine to come up
	I1212 01:02:41.427069  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.427486  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.427513  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.427449  142772 retry.go:31] will retry after 330.511025ms: waiting for machine to come up
	I1212 01:02:41.760038  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.760388  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.760434  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.760337  142772 retry.go:31] will retry after 564.656792ms: waiting for machine to come up
	I1212 01:02:42.327037  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.327545  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.327567  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.327505  142772 retry.go:31] will retry after 473.714754ms: waiting for machine to come up
	I1212 01:02:42.803228  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.803607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.803639  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.803548  142772 retry.go:31] will retry after 872.405168ms: waiting for machine to come up
	I1212 01:02:43.677522  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:43.677891  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:43.677919  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:43.677833  142772 retry.go:31] will retry after 1.092518369s: waiting for machine to come up
	I1212 01:02:44.655390  141411 start.go:360] acquireMachinesLock for no-preload-242725: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:02:44.771319  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:44.771721  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:44.771751  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:44.771666  142772 retry.go:31] will retry after 1.147907674s: waiting for machine to come up
	I1212 01:02:45.921165  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:45.921636  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:45.921666  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:45.921589  142772 retry.go:31] will retry after 1.69009103s: waiting for machine to come up
	I1212 01:02:47.614391  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:47.614838  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:47.614863  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:47.614792  142772 retry.go:31] will retry after 1.65610672s: waiting for machine to come up
	I1212 01:02:49.272864  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:49.273312  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:49.273337  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:49.273268  142772 retry.go:31] will retry after 2.50327667s: waiting for machine to come up
	I1212 01:02:51.779671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:51.780077  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:51.780104  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:51.780016  142772 retry.go:31] will retry after 2.808303717s: waiting for machine to come up
	I1212 01:02:54.591866  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:54.592241  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:54.592285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:54.592208  142772 retry.go:31] will retry after 3.689107313s: waiting for machine to come up
	I1212 01:02:58.282552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.282980  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has current primary IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.283005  141469 main.go:141] libmachine: (embed-certs-607268) Found IP for machine: 192.168.50.151
	I1212 01:02:58.283018  141469 main.go:141] libmachine: (embed-certs-607268) Reserving static IP address...
	I1212 01:02:58.283419  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.283441  141469 main.go:141] libmachine: (embed-certs-607268) Reserved static IP address: 192.168.50.151
	I1212 01:02:58.283453  141469 main.go:141] libmachine: (embed-certs-607268) DBG | skip adding static IP to network mk-embed-certs-607268 - found existing host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"}
	I1212 01:02:58.283462  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Getting to WaitForSSH function...
	I1212 01:02:58.283469  141469 main.go:141] libmachine: (embed-certs-607268) Waiting for SSH to be available...
	I1212 01:02:58.285792  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286126  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.286162  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286298  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH client type: external
	I1212 01:02:58.286330  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa (-rw-------)
	I1212 01:02:58.286378  141469 main.go:141] libmachine: (embed-certs-607268) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:02:58.286394  141469 main.go:141] libmachine: (embed-certs-607268) DBG | About to run SSH command:
	I1212 01:02:58.286403  141469 main.go:141] libmachine: (embed-certs-607268) DBG | exit 0
	I1212 01:02:58.407633  141469 main.go:141] libmachine: (embed-certs-607268) DBG | SSH cmd err, output: <nil>: 
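
The repeated "will retry after ...: waiting for machine to come up" lines above are minikube polling libvirt for the VM's DHCP lease with a growing, capped delay before falling through to the SSH check. A minimal Go sketch of that polling pattern (illustrative only; lookupIP is a hypothetical stand-in for the libvirt lease query, and this is not minikube's actual retry.go helper):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases;
// it corresponds to the "unable to find current IP address" lines above.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP retries with a growing, capped delay until an IP appears or the
// overall deadline passes -- the shape of the "will retry after ..." messages.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay *= 2; delay > 4*time.Second {
			delay = 4 * time.Second
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}
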
	I1212 01:02:58.407985  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetConfigRaw
	I1212 01:02:58.408745  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.411287  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.411642  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411920  141469 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/config.json ...
	I1212 01:02:58.412117  141469 machine.go:93] provisionDockerMachine start ...
	I1212 01:02:58.412136  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:58.412336  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.414338  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414643  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.414669  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414765  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.414944  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415114  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415259  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.415450  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.415712  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.415724  141469 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:02:58.520032  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:02:58.520068  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520312  141469 buildroot.go:166] provisioning hostname "embed-certs-607268"
	I1212 01:02:58.520341  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520539  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.523169  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.523584  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523733  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.523910  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524092  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524252  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.524411  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.524573  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.524584  141469 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-607268 && echo "embed-certs-607268" | sudo tee /etc/hostname
	I1212 01:02:58.642175  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-607268
	
	I1212 01:02:58.642232  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.645114  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645480  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.645505  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645698  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.645909  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646063  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646192  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.646334  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.646513  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.646530  141469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-607268' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-607268/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-607268' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:02:58.758655  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:58.758696  141469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:02:58.758715  141469 buildroot.go:174] setting up certificates
	I1212 01:02:58.758726  141469 provision.go:84] configureAuth start
	I1212 01:02:58.758735  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.759031  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.761749  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762024  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.762052  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762165  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.764356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.764699  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764781  141469 provision.go:143] copyHostCerts
	I1212 01:02:58.764874  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:02:58.764898  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:02:58.764986  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:02:58.765107  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:02:58.765118  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:02:58.765160  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:02:58.765235  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:02:58.765245  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:02:58.765296  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:02:58.765369  141469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-607268 san=[127.0.0.1 192.168.50.151 embed-certs-607268 localhost minikube]
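
The server certificate above is signed by minikube's local CA and carries the SAN list shown (127.0.0.1, 192.168.50.151, embed-certs-607268, localhost, minikube). A minimal Go sketch of issuing a certificate with the same SANs (self-signed here for brevity and purely illustrative; minikube's provision code signs with its CA instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed for brevity; a real server cert would be signed by the CA key.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-607268"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-607268", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.151")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
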
	I1212 01:02:58.890412  141469 provision.go:177] copyRemoteCerts
	I1212 01:02:58.890519  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:02:58.890560  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.892973  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893262  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.893291  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893471  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.893647  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.893761  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.893855  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:58.973652  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:02:58.998097  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:02:59.022028  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:02:59.045859  141469 provision.go:87] duration metric: took 287.094036ms to configureAuth
	I1212 01:02:59.045892  141469 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:02:59.046119  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:02:59.046242  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.048869  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049255  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.049285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049465  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.049642  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049764  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049864  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.049974  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.050181  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.050198  141469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:02:59.276670  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:02:59.276708  141469 machine.go:96] duration metric: took 864.577145ms to provisionDockerMachine
	I1212 01:02:59.276724  141469 start.go:293] postStartSetup for "embed-certs-607268" (driver="kvm2")
	I1212 01:02:59.276747  141469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:02:59.276780  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.277171  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:02:59.277207  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.279974  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280341  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.280387  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280529  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.280738  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.280897  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.281026  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.363091  141469 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:02:59.367476  141469 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:02:59.367503  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:02:59.367618  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:02:59.367749  141469 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:02:59.367844  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:02:59.377895  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:02:59.402410  141469 start.go:296] duration metric: took 125.668908ms for postStartSetup
	I1212 01:02:59.402462  141469 fix.go:56] duration metric: took 19.749561015s for fixHost
	I1212 01:02:59.402485  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.405057  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.405385  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405624  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.405808  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.405974  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.406094  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.406237  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.406423  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.406436  141469 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:02:59.516697  141884 start.go:364] duration metric: took 3m42.810720852s to acquireMachinesLock for "default-k8s-diff-port-076578"
	I1212 01:02:59.516759  141884 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:59.516773  141884 fix.go:54] fixHost starting: 
	I1212 01:02:59.517192  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:59.517241  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:59.533969  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I1212 01:02:59.534367  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:59.534831  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:02:59.534854  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:59.535165  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:59.535362  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:02:59.535499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:02:59.536930  141884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-076578: state=Stopped err=<nil>
	I1212 01:02:59.536951  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	W1212 01:02:59.537093  141884 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:59.538974  141884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-076578" ...
	I1212 01:02:59.516496  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965379.489556963
	
	I1212 01:02:59.516525  141469 fix.go:216] guest clock: 1733965379.489556963
	I1212 01:02:59.516535  141469 fix.go:229] Guest: 2024-12-12 01:02:59.489556963 +0000 UTC Remote: 2024-12-12 01:02:59.40246635 +0000 UTC m=+295.033602018 (delta=87.090613ms)
	I1212 01:02:59.516574  141469 fix.go:200] guest clock delta is within tolerance: 87.090613ms
	I1212 01:02:59.516580  141469 start.go:83] releasing machines lock for "embed-certs-607268", held for 19.863720249s
	I1212 01:02:59.516605  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.516828  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:59.519731  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520075  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.520111  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520202  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520752  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520921  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.521064  141469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:02:59.521131  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.521155  141469 ssh_runner.go:195] Run: cat /version.json
	I1212 01:02:59.521171  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.523724  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.523971  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524036  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524063  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524221  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524374  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524375  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524401  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524553  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.524562  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524719  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524719  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.524837  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.525000  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.628168  141469 ssh_runner.go:195] Run: systemctl --version
	I1212 01:02:59.635800  141469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:02:59.788137  141469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:02:59.795216  141469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:02:59.795289  141469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:02:59.811889  141469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:02:59.811917  141469 start.go:495] detecting cgroup driver to use...
	I1212 01:02:59.811992  141469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:02:59.827154  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:02:59.841138  141469 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:02:59.841193  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:02:59.854874  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:02:59.869250  141469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:02:59.994723  141469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:00.136385  141469 docker.go:233] disabling docker service ...
	I1212 01:03:00.136462  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:00.150966  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:00.163907  141469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:00.340171  141469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:00.480828  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:00.498056  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:00.518273  141469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:00.518339  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.529504  141469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:00.529571  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.540616  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.553419  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.566004  141469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:00.577682  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.589329  141469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.612561  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.625526  141469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:00.635229  141469 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:00.635289  141469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:00.657569  141469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:00.669982  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:00.793307  141469 ssh_runner.go:195] Run: sudo systemctl restart crio
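
For reference, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these values before the restart (reconstructed from the commands shown, not captured from the VM):

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
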
	I1212 01:03:00.887423  141469 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:00.887498  141469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:00.892715  141469 start.go:563] Will wait 60s for crictl version
	I1212 01:03:00.892773  141469 ssh_runner.go:195] Run: which crictl
	I1212 01:03:00.896646  141469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:00.933507  141469 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:00.933606  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:00.977011  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:01.008491  141469 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:02:59.540301  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Start
	I1212 01:02:59.540482  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring networks are active...
	I1212 01:02:59.541181  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network default is active
	I1212 01:02:59.541503  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network mk-default-k8s-diff-port-076578 is active
	I1212 01:02:59.541802  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Getting domain xml...
	I1212 01:02:59.542437  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Creating domain...
	I1212 01:03:00.796803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting to get IP...
	I1212 01:03:00.797932  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798386  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798495  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.798404  142917 retry.go:31] will retry after 199.022306ms: waiting for machine to come up
	I1212 01:03:00.999067  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999547  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999572  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.999499  142917 retry.go:31] will retry after 340.093067ms: waiting for machine to come up
	I1212 01:03:01.340839  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341513  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.341437  142917 retry.go:31] will retry after 469.781704ms: waiting for machine to come up
	I1212 01:03:01.009956  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:03:01.012767  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013224  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:03:01.013252  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013471  141469 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:01.017815  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:01.032520  141469 kubeadm.go:883] updating cluster {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:01.032662  141469 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:01.032715  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:01.070406  141469 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:01.070478  141469 ssh_runner.go:195] Run: which lz4
	I1212 01:03:01.074840  141469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:01.079207  141469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:01.079238  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:02.524822  141469 crio.go:462] duration metric: took 1.450020274s to copy over tarball
	I1212 01:03:02.524909  141469 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:01.812803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813298  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813335  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.813232  142917 retry.go:31] will retry after 552.327376ms: waiting for machine to come up
	I1212 01:03:02.367682  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368152  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368187  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:02.368106  142917 retry.go:31] will retry after 629.731283ms: waiting for machine to come up
	I1212 01:03:02.999887  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000307  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000339  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.000235  142917 retry.go:31] will retry after 764.700679ms: waiting for machine to come up
	I1212 01:03:03.766389  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766891  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766919  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.766845  142917 retry.go:31] will retry after 920.806371ms: waiting for machine to come up
	I1212 01:03:04.689480  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690029  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:04.689996  142917 retry.go:31] will retry after 1.194297967s: waiting for machine to come up
	I1212 01:03:05.886345  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886729  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886796  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:05.886714  142917 retry.go:31] will retry after 1.60985804s: waiting for machine to come up
	I1212 01:03:04.719665  141469 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194717299s)
	I1212 01:03:04.719708  141469 crio.go:469] duration metric: took 2.194851225s to extract the tarball
	I1212 01:03:04.719719  141469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:04.756600  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:04.802801  141469 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:04.802832  141469 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:04.802840  141469 kubeadm.go:934] updating node { 192.168.50.151 8443 v1.31.2 crio true true} ...
	I1212 01:03:04.802949  141469 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-607268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:04.803008  141469 ssh_runner.go:195] Run: crio config
	I1212 01:03:04.854778  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:04.854804  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:04.854815  141469 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:04.854836  141469 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.151 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-607268 NodeName:embed-certs-607268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:04.854962  141469 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-607268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:04.855023  141469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:04.864877  141469 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:04.864967  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:04.874503  141469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1212 01:03:04.891124  141469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:04.907560  141469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1212 01:03:04.924434  141469 ssh_runner.go:195] Run: grep 192.168.50.151	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:04.928518  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:04.940523  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:05.076750  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:05.094388  141469 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268 for IP: 192.168.50.151
	I1212 01:03:05.094424  141469 certs.go:194] generating shared ca certs ...
	I1212 01:03:05.094440  141469 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:05.094618  141469 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:05.094691  141469 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:05.094710  141469 certs.go:256] generating profile certs ...
	I1212 01:03:05.094833  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/client.key
	I1212 01:03:05.094916  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key.9253237b
	I1212 01:03:05.094968  141469 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key
	I1212 01:03:05.095131  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:05.095177  141469 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:05.095192  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:05.095224  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:05.095254  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:05.095293  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:05.095359  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:05.096133  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:05.130605  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:05.164694  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:05.206597  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:05.241305  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 01:03:05.270288  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:05.296137  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:05.320158  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:05.343820  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:05.373277  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:05.397070  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:05.420738  141469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:05.437822  141469 ssh_runner.go:195] Run: openssl version
	I1212 01:03:05.443744  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:05.454523  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459182  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459237  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.465098  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:05.475681  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:05.486396  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490883  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490929  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.496613  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:05.507295  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:05.517980  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522534  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522590  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.528117  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:05.538979  141469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:05.543723  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:05.549611  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:05.555445  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:05.561482  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:05.567221  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:05.573015  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
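
	The six openssl invocations above each pass -checkend 86400, i.e. they only verify that the control-plane certificate will still be valid 24 hours from now. As a rough illustration (not minikube's code), the same check can be expressed with Go's standard crypto/x509 package; the certificate path is copied from the log, everything else in this sketch is an assumption.

	// checkend.go: sketch of the "openssl x509 -noout -in <cert> -checkend 86400" check.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Path taken from the log; the other checks above use the remaining cert paths.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of -checkend 86400: fail if the cert is no longer valid
		// 86400 seconds (one day) from now.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 86400 seconds")
			os.Exit(1)
		}
		fmt.Println("certificate is still valid in 86400 seconds")
	}
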
	I1212 01:03:05.578902  141469 kubeadm.go:392] StartCluster: {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:05.578984  141469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:05.579042  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.619027  141469 cri.go:89] found id: ""
	I1212 01:03:05.619115  141469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:05.629472  141469 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:05.629501  141469 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:05.629567  141469 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:05.639516  141469 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:05.640491  141469 kubeconfig.go:125] found "embed-certs-607268" server: "https://192.168.50.151:8443"
	I1212 01:03:05.642468  141469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:05.651891  141469 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.151
	I1212 01:03:05.651922  141469 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:05.651934  141469 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:05.651978  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.686414  141469 cri.go:89] found id: ""
	I1212 01:03:05.686501  141469 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:05.702724  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:05.712454  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:05.712480  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:05.712531  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:05.721529  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:05.721603  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:05.730897  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:05.739758  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:05.739815  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:05.749089  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.758042  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:05.758104  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.767425  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:05.776195  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:05.776260  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:05.785435  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:05.794795  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:05.918710  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:06.846975  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.072898  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.139677  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.237216  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:07.237336  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:07.738145  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.238219  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.255671  141469 api_server.go:72] duration metric: took 1.018455783s to wait for apiserver process to appear ...
	I1212 01:03:08.255705  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:08.255734  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:08.256408  141469 api_server.go:269] stopped: https://192.168.50.151:8443/healthz: Get "https://192.168.50.151:8443/healthz": dial tcp 192.168.50.151:8443: connect: connection refused
	I1212 01:03:08.756070  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:07.498527  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498942  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498966  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:07.498889  142917 retry.go:31] will retry after 2.278929136s: waiting for machine to come up
	I1212 01:03:09.779321  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779847  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779879  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:09.779793  142917 retry.go:31] will retry after 1.82028305s: waiting for machine to come up
	I1212 01:03:10.630080  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.630121  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.630140  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.674408  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.674470  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.756660  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.763043  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:10.763088  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.256254  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.263457  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.263481  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.756759  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.764019  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.764053  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:12.256627  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:12.262369  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:03:12.270119  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:12.270153  141469 api_server.go:131] duration metric: took 4.014438706s to wait for apiserver health ...
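
	The block above is the readiness loop: /healthz first returns 403 for the unauthenticated probe, then 500 while some poststarthooks (the [-] entries) are still pending, and finally 200, at which point the wait ends after roughly 4 seconds. A minimal sketch of such a poll follows; it is not the minikube implementation (which trusts the cluster CA), and the InsecureSkipVerify transport plus the 500ms cadence are assumptions made only to keep the example self-contained.

	// healthzwait.go: sketch of polling the apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for this sketch only: skip TLS verification instead of
			// loading the cluster CA the way the real client does.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "ok", as in the final response above
				}
				// 403 (anonymous user forbidden) and 500 (poststarthooks pending)
				// both mean "keep waiting".
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.151:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
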
	I1212 01:03:12.270164  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:12.270172  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:12.272148  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:12.273667  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:12.289376  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:12.312620  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:12.323663  141469 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:12.323715  141469 system_pods.go:61] "coredns-7c65d6cfc9-n66x6" [ae2c1ac7-0c17-453d-a05c-70fbf6d81e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:12.323727  141469 system_pods.go:61] "etcd-embed-certs-607268" [811dc3d0-d893-45a0-a5c7-3fee0efd2e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:12.323747  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [11848f2c-215b-4cf4-88f0-93151c55e7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:12.323764  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [4f4066ab-b6e4-4a46-a03b-dda1776c39ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:12.323776  141469 system_pods.go:61] "kube-proxy-9f6lj" [2463030a-d7db-4031-9e26-0a56a9067520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:12.323784  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [c2aeaf4a-7fb8-4bb8-87ea-5401db017fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:12.323795  141469 system_pods.go:61] "metrics-server-6867b74b74-5bms9" [e1a794f9-cf60-486f-a0e8-670dc7dfb4da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:12.323803  141469 system_pods.go:61] "storage-provisioner" [b29860cd-465d-4e70-ad5d-dd17c22ae290] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:12.323820  141469 system_pods.go:74] duration metric: took 11.170811ms to wait for pod list to return data ...
	I1212 01:03:12.323845  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:12.327828  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:12.327863  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:12.327880  141469 node_conditions.go:105] duration metric: took 4.029256ms to run NodePressure ...
	I1212 01:03:12.327902  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:12.638709  141469 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644309  141469 kubeadm.go:739] kubelet initialised
	I1212 01:03:12.644332  141469 kubeadm.go:740] duration metric: took 5.590168ms waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644356  141469 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:12.650768  141469 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:11.601456  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602012  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602044  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:11.601956  142917 retry.go:31] will retry after 2.272258384s: waiting for machine to come up
	I1212 01:03:13.876607  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.876986  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.877024  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:13.876950  142917 retry.go:31] will retry after 4.014936005s: waiting for machine to come up
	I1212 01:03:19.148724  142150 start.go:364] duration metric: took 3m33.810164292s to acquireMachinesLock for "old-k8s-version-738445"
	I1212 01:03:19.148804  142150 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:19.148816  142150 fix.go:54] fixHost starting: 
	I1212 01:03:19.149247  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:19.149331  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:19.167749  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 01:03:19.168331  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:19.168873  142150 main.go:141] libmachine: Using API Version  1
	I1212 01:03:19.168906  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:19.169286  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:19.169500  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:19.169655  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 01:03:19.171285  142150 fix.go:112] recreateIfNeeded on old-k8s-version-738445: state=Stopped err=<nil>
	I1212 01:03:19.171323  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	W1212 01:03:19.171470  142150 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:19.174413  142150 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-738445" ...
	I1212 01:03:14.657097  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:16.658207  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:17.657933  141469 pod_ready.go:93] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:17.657957  141469 pod_ready.go:82] duration metric: took 5.007165494s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:17.657966  141469 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
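
	The pod_ready waits above poll each system-critical pod until its Ready condition reports True, with a 4m0s budget per pod. A rough client-go sketch of that pattern is shown below; the kubeconfig path and pod name are lifted from the log, but the code is only an illustration of the idea, not the helper minikube actually uses.

	// podready.go: sketch of waiting for a pod's Ready condition to become True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path matches the file copied to the node earlier in the log; adjust as needed.
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
		for time.Now().Before(deadline) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-n66x6", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
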
	I1212 01:03:19.175763  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .Start
	I1212 01:03:19.175946  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring networks are active...
	I1212 01:03:19.176721  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network default is active
	I1212 01:03:19.177067  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network mk-old-k8s-version-738445 is active
	I1212 01:03:19.177512  142150 main.go:141] libmachine: (old-k8s-version-738445) Getting domain xml...
	I1212 01:03:19.178281  142150 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 01:03:17.896127  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has current primary IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896639  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Found IP for machine: 192.168.39.174
	I1212 01:03:17.896659  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserving static IP address...
	I1212 01:03:17.897028  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.897062  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserved static IP address: 192.168.39.174
	I1212 01:03:17.897087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | skip adding static IP to network mk-default-k8s-diff-port-076578 - found existing host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"}
	I1212 01:03:17.897108  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Getting to WaitForSSH function...
	I1212 01:03:17.897126  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for SSH to be available...
	I1212 01:03:17.899355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899727  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.899754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899911  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH client type: external
	I1212 01:03:17.899941  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa (-rw-------)
	I1212 01:03:17.899976  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:17.899989  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | About to run SSH command:
	I1212 01:03:17.900005  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | exit 0
	I1212 01:03:18.036261  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:18.036610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetConfigRaw
	I1212 01:03:18.037352  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.040173  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040570  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.040595  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040866  141884 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/config.json ...
	I1212 01:03:18.041107  141884 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:18.041134  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.041355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.043609  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.043945  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.043973  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.044142  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.044291  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044466  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.044745  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.044986  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.045002  141884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:18.156161  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:18.156193  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156472  141884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-076578"
	I1212 01:03:18.156499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.159391  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.159871  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.159903  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.160048  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.160244  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160379  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160500  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.160681  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.160898  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.160917  141884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-076578 && echo "default-k8s-diff-port-076578" | sudo tee /etc/hostname
	I1212 01:03:18.285904  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-076578
	
	I1212 01:03:18.285937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.288620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.288987  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.289010  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.289285  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.289491  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289658  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289799  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.289981  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.290190  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.290223  141884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-076578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-076578/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-076578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:18.409683  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:18.409721  141884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:18.409751  141884 buildroot.go:174] setting up certificates
	I1212 01:03:18.409761  141884 provision.go:84] configureAuth start
	I1212 01:03:18.409782  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.410045  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.412393  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412721  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.412756  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.415204  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415502  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.415530  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415663  141884 provision.go:143] copyHostCerts
	I1212 01:03:18.415735  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:18.415757  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:18.415832  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:18.415925  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:18.415933  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:18.415952  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:18.416007  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:18.416015  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:18.416032  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:18.416081  141884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-076578 san=[127.0.0.1 192.168.39.174 default-k8s-diff-port-076578 localhost minikube]
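
	The provisioning step above generates a CA-signed server certificate with the SAN list [127.0.0.1 192.168.39.174 default-k8s-diff-port-076578 localhost minikube]. A self-contained crypto/x509 sketch of that flow follows; it creates a throwaway CA instead of loading the ca.pem / ca-key.pem pair, and the validity mirrors the CertExpiration:26280h0m0s setting seen earlier in the log, so treat it as an illustration rather than the actual provisioner.

	// servercert.go: sketch of issuing a server certificate with DNS and IP SANs.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Assumption: a fresh CA, rather than the existing machine CA used in the log.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration value from the log
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate carrying the SANs listed in the provision log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-076578"}},
			DNSNames:     []string{"default-k8s-diff-port-076578", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.174")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}
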
	I1212 01:03:18.502493  141884 provision.go:177] copyRemoteCerts
	I1212 01:03:18.502562  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:18.502594  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.505104  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505377  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.505409  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505568  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.505754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.505892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.506034  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.590425  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:18.616850  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 01:03:18.640168  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:18.664517  141884 provision.go:87] duration metric: took 254.738256ms to configureAuth
	I1212 01:03:18.664542  141884 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:18.664705  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:03:18.664778  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.667425  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.667784  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.667808  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.668004  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.668178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668313  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668448  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.668587  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.668751  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.668772  141884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:18.906880  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:18.906908  141884 machine.go:96] duration metric: took 865.784426ms to provisionDockerMachine
	I1212 01:03:18.906920  141884 start.go:293] postStartSetup for "default-k8s-diff-port-076578" (driver="kvm2")
	I1212 01:03:18.906931  141884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:18.906949  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.907315  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:18.907348  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.909882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910213  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.910242  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910347  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.910542  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.910680  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.910806  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.994819  141884 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:18.998959  141884 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:18.998989  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:18.999069  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:18.999163  141884 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:18.999252  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:19.009226  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:19.032912  141884 start.go:296] duration metric: took 125.973128ms for postStartSetup
	I1212 01:03:19.032960  141884 fix.go:56] duration metric: took 19.516187722s for fixHost
	I1212 01:03:19.032990  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.035623  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.035947  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.035977  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.036151  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.036310  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036438  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036581  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.036738  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:19.036906  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:19.036919  141884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:19.148565  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965399.101726035
	
	I1212 01:03:19.148592  141884 fix.go:216] guest clock: 1733965399.101726035
	I1212 01:03:19.148602  141884 fix.go:229] Guest: 2024-12-12 01:03:19.101726035 +0000 UTC Remote: 2024-12-12 01:03:19.032967067 +0000 UTC m=+242.472137495 (delta=68.758968ms)
	I1212 01:03:19.148628  141884 fix.go:200] guest clock delta is within tolerance: 68.758968ms
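Note: the fix.go lines above read the guest clock over SSH (`date +%s.%N`) and compare it to the host clock, accepting the ~68ms delta as within tolerance. A minimal Go sketch of that comparison, assuming an illustrative tolerance constant rather than minikube's actual one:

```go
// Minimal sketch (not minikube's fix.go): compare a guest timestamp, as
// returned by `date +%s.%N`, against the local clock and decide whether
// the drift is within a hypothetical tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1733965399.101726035" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to 9 digits so ".1" means 100ms, not 1ns.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733965399.101726035")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold, for illustration only
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
```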
	I1212 01:03:19.148635  141884 start.go:83] releasing machines lock for "default-k8s-diff-port-076578", held for 19.631903968s
	I1212 01:03:19.148688  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.149016  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:19.151497  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.151926  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.151954  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.152124  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152598  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152762  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152834  141884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:19.152892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.152952  141884 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:19.152972  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.155620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155694  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.155962  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156057  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.156114  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156123  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156316  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156327  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156469  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156583  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156619  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156826  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.156824  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.268001  141884 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:19.275696  141884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:19.426624  141884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:19.432842  141884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:19.432911  141884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:19.449082  141884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:19.449108  141884 start.go:495] detecting cgroup driver to use...
	I1212 01:03:19.449187  141884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:19.466543  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:19.482668  141884 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:19.482733  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:19.497124  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:19.512626  141884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:19.624948  141884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:19.779469  141884 docker.go:233] disabling docker service ...
	I1212 01:03:19.779545  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:19.794888  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:19.810497  141884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:19.954827  141884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:20.086435  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:20.100917  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:20.120623  141884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:20.120683  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.134353  141884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:20.134431  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.150373  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.165933  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.181524  141884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:20.196891  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.209752  141884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.228990  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.241553  141884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:20.251819  141884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:20.251883  141884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:20.267155  141884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:20.277683  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:20.427608  141884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:20.525699  141884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:20.525804  141884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:20.530984  141884 start.go:563] Will wait 60s for crictl version
	I1212 01:03:20.531055  141884 ssh_runner.go:195] Run: which crictl
	I1212 01:03:20.535013  141884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:20.576177  141884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:20.576251  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.605529  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.638175  141884 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:20.639475  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:20.642566  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643001  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:20.643034  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643196  141884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:20.647715  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:20.662215  141884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:20.662337  141884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:20.662381  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:20.705014  141884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:20.705112  141884 ssh_runner.go:195] Run: which lz4
	I1212 01:03:20.709477  141884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:20.714111  141884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:20.714145  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:19.666527  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:21.666676  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:24.165316  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:20.457742  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting to get IP...
	I1212 01:03:20.458818  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.459318  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.459384  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.459280  143077 retry.go:31] will retry after 312.060355ms: waiting for machine to come up
	I1212 01:03:20.772778  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.773842  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.773876  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.773802  143077 retry.go:31] will retry after 381.023448ms: waiting for machine to come up
	I1212 01:03:21.156449  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.156985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.157017  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.156943  143077 retry.go:31] will retry after 395.528873ms: waiting for machine to come up
	I1212 01:03:21.554397  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.554873  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.554905  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.554833  143077 retry.go:31] will retry after 542.808989ms: waiting for machine to come up
	I1212 01:03:22.099791  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.100330  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.100360  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.100301  143077 retry.go:31] will retry after 627.111518ms: waiting for machine to come up
	I1212 01:03:22.728727  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.729219  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.729244  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.729167  143077 retry.go:31] will retry after 649.039654ms: waiting for machine to come up
	I1212 01:03:23.379498  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:23.379935  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:23.379968  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:23.379864  143077 retry.go:31] will retry after 1.057286952s: waiting for machine to come up
	I1212 01:03:24.438408  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:24.438821  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:24.438849  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:24.438774  143077 retry.go:31] will retry after 912.755322ms: waiting for machine to come up
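Note: the retry.go lines above wait for the old-k8s-version VM to obtain an IP, retrying with growing, jittered delays. A minimal sketch of such a retry helper; the function name, growth factor and jitter scheme are illustrative assumptions, not minikube's actual retry API:

```go
// Minimal sketch of a retry-with-backoff helper in the spirit of the
// retry.go lines above: call a probe, and on failure wait a growing,
// jittered delay before trying again.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds, the attempt
// budget is exhausted, or the deadline passes.
func retryWithBackoff(probe func() error, attempts int, base time.Duration, deadline time.Time) error {
	delay := base
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = probe(); lastErr == nil {
			return nil
		}
		if time.Now().After(deadline) {
			break
		}
		// Jitter the delay so concurrent waiters do not retry in lockstep.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, lastErr)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow ~1.5x per attempt
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, lastErr)
}

func main() {
	tries := 0
	err := retryWithBackoff(func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 10, 300*time.Millisecond, time.Now().Add(30*time.Second))
	fmt.Println("result:", err)
}
```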
	I1212 01:03:22.285157  141884 crio.go:462] duration metric: took 1.575709911s to copy over tarball
	I1212 01:03:22.285258  141884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:24.495814  141884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210502234s)
	I1212 01:03:24.495848  141884 crio.go:469] duration metric: took 2.210655432s to extract the tarball
	I1212 01:03:24.495857  141884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:24.533396  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:24.581392  141884 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:24.581419  141884 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:24.581428  141884 kubeadm.go:934] updating node { 192.168.39.174 8444 v1.31.2 crio true true} ...
	I1212 01:03:24.581524  141884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-076578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:24.581594  141884 ssh_runner.go:195] Run: crio config
	I1212 01:03:24.625042  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:24.625073  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:24.625083  141884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:24.625111  141884 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-076578 NodeName:default-k8s-diff-port-076578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:24.625238  141884 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-076578"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:24.625313  141884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:24.635818  141884 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:24.635903  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:24.645966  141884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:03:24.665547  141884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:24.682639  141884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1212 01:03:24.700147  141884 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:24.704172  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:24.716697  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:24.842374  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:24.860641  141884 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578 for IP: 192.168.39.174
	I1212 01:03:24.860676  141884 certs.go:194] generating shared ca certs ...
	I1212 01:03:24.860700  141884 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:24.860888  141884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:24.860955  141884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:24.860970  141884 certs.go:256] generating profile certs ...
	I1212 01:03:24.861110  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.key
	I1212 01:03:24.861200  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key.4a68806a
	I1212 01:03:24.861251  141884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key
	I1212 01:03:24.861391  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:24.861444  141884 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:24.861458  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:24.861498  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:24.861535  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:24.861565  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:24.861629  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:24.862588  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:24.899764  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:24.950373  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:24.983222  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:25.017208  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 01:03:25.042653  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:25.071358  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:25.097200  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:25.122209  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:25.150544  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:25.181427  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:25.210857  141884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:25.229580  141884 ssh_runner.go:195] Run: openssl version
	I1212 01:03:25.236346  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:25.247510  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252355  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252407  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.258511  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:25.272698  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:25.289098  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295737  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295806  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.304133  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:25.315805  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:25.328327  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333482  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333539  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.339367  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:25.351612  141884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:25.357060  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:25.363452  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:25.369984  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:25.376434  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:25.382895  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:25.389199  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
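Note: the `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check sketched in Go with the standard library; the path passed in main is taken from the log, everything else is illustrative:

```go
// Minimal sketch of the `openssl x509 -checkend 86400` checks above:
// parse a PEM-encoded certificate and verify it is still valid 24h
// from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validForAnother reports whether the first certificate in the PEM file
// is valid now and still valid at now+window.
func validForAnother(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, fmt.Errorf("%s: no CERTIFICATE block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	cutoff := time.Now().Add(window)
	return time.Now().After(cert.NotBefore) && cutoff.Before(cert.NotAfter), nil
}

func main() {
	ok, err := validForAnother("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("valid for another 24h:", ok)
}
```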
	I1212 01:03:25.395232  141884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:25.395325  141884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:25.395370  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.439669  141884 cri.go:89] found id: ""
	I1212 01:03:25.439749  141884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:25.453870  141884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:25.453893  141884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:25.453951  141884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:25.464552  141884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:25.465609  141884 kubeconfig.go:125] found "default-k8s-diff-port-076578" server: "https://192.168.39.174:8444"
	I1212 01:03:25.467767  141884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:25.477907  141884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I1212 01:03:25.477943  141884 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:25.477958  141884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:25.478018  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.521891  141884 cri.go:89] found id: ""
	I1212 01:03:25.521978  141884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:25.539029  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:25.549261  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:25.549283  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:25.549341  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:03:25.558948  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:25.559022  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:25.568947  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:03:25.579509  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:25.579614  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:25.589573  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.600434  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:25.600498  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.610337  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:03:25.619956  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:25.620014  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:25.631231  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:25.641366  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:25.761159  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:26.165525  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:28.168457  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.168492  141469 pod_ready.go:82] duration metric: took 10.510517291s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.168506  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175334  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.175361  141469 pod_ready.go:82] duration metric: took 6.84531ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175375  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183060  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.183093  141469 pod_ready.go:82] duration metric: took 7.709158ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183106  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.190999  141469 pod_ready.go:93] pod "kube-proxy-9f6lj" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.191028  141469 pod_ready.go:82] duration metric: took 7.913069ms for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.191040  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199945  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.199972  141469 pod_ready.go:82] duration metric: took 8.923682ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199984  141469 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
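Note: the pod_ready.go lines above poll pods in kube-system until their Ready condition becomes True (etcd-embed-certs-607268 took about 10.5s). A minimal sketch of such a wait using client-go; the kubeconfig path is a placeholder and the helper names are assumptions, not minikube's code:

```go
// Minimal sketch of a "wait for pod Ready" loop: fetch the pod
// periodically and check its PodReady condition until it is True
// or a timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "etcd-embed-certs-607268", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```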
	I1212 01:03:25.352682  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:25.353126  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:25.353154  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:25.353073  143077 retry.go:31] will retry after 1.136505266s: waiting for machine to come up
	I1212 01:03:26.491444  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:26.491927  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:26.491955  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:26.491868  143077 retry.go:31] will retry after 1.467959561s: waiting for machine to come up
	I1212 01:03:27.961709  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:27.962220  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:27.962255  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:27.962169  143077 retry.go:31] will retry after 2.70831008s: waiting for machine to come up
	I1212 01:03:26.830271  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069070962s)
	I1212 01:03:26.830326  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.035935  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.113317  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.210226  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:27.210329  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:27.710504  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.211114  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.242967  141884 api_server.go:72] duration metric: took 1.032736901s to wait for apiserver process to appear ...
	I1212 01:03:28.243012  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:28.243038  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:28.243643  141884 api_server.go:269] stopped: https://192.168.39.174:8444/healthz: Get "https://192.168.39.174:8444/healthz": dial tcp 192.168.39.174:8444: connect: connection refused
	I1212 01:03:28.743921  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.546075  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.546113  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.546129  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.621583  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.621619  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.743860  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.750006  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:31.750052  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.243382  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.269990  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.270033  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.743516  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.752979  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.753012  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:33.243571  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:33.247902  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:03:33.253786  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:33.253810  141884 api_server.go:131] duration metric: took 5.010790107s to wait for apiserver health ...
	I1212 01:03:33.253820  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:33.253826  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:33.255762  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
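Note: the healthz wait above (api_server.go polling https://192.168.39.174:8444/healthz until it stops returning 403/500) boils down to repeatedly probing the endpoint until it answers 200. A minimal Go sketch of that probe is shown below; the URL and timeout are taken from, or assumed from, the log, and this is not minikube's actual api_server.go code.

// Illustrative sketch only: poll an apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed certificate during bootstrap, so this
	// unauthenticated probe skips verification, like the checks logged above.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			// 403 means anonymous access is still blocked; 500 means some
			// post-start hooks (e.g. rbac/bootstrap-roles) have not finished.
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.174:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}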
	I1212 01:03:30.208396  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:32.708024  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:30.671930  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:30.672414  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:30.672442  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:30.672366  143077 retry.go:31] will retry after 2.799706675s: waiting for machine to come up
	I1212 01:03:33.474261  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:33.474816  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:33.474851  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:33.474758  143077 retry.go:31] will retry after 4.339389188s: waiting for machine to come up
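Note: the "will retry after ..." lines above come from minikube's retry helper while it waits for the VM to obtain an IP address. The sketch below shows the general retry-with-growing-interval shape; minikube's retry.go uses its own jittered backoff, so the fixed doubling here is an assumption for illustration only.

// Illustrative retry-until-ready helper with a growing wait between attempts.
package main

import (
	"errors"
	"fmt"
	"time"
)

func retryUntil(check func() error, timeout time.Duration) error {
	wait := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %s: %w", timeout, err)
		}
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2 // grow the interval between attempts
	}
}

func main() {
	attempts := 0
	err := retryUntil(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, time.Minute)
	fmt.Println("result:", err)
}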
	I1212 01:03:33.257007  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:33.267934  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:33.286197  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:33.297934  141884 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:33.297982  141884 system_pods.go:61] "coredns-7c65d6cfc9-xn886" [db1f42f1-93d9-4942-813d-e3de1cc24801] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:33.297995  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [25555578-8169-4986-aa10-06a442152c50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:33.298006  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [1004c64c-91ca-43c3-9c3d-43dab13d3812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:33.298023  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [63d42313-4ea9-44f9-a8eb-b0c6c73424c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:33.298039  141884 system_pods.go:61] "kube-proxy-7frgh" [191ed421-4297-47c7-a46d-407a8eaa0378] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:33.298049  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [1506a505-697c-4b80-b7ef-55de1116fa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:33.298060  141884 system_pods.go:61] "metrics-server-6867b74b74-k9s7n" [806badc0-b609-421f-9203-3fd91212a145] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:33.298077  141884 system_pods.go:61] "storage-provisioner" [bc133673-b7e2-42b2-98ac-e3284c9162ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:33.298090  141884 system_pods.go:74] duration metric: took 11.875762ms to wait for pod list to return data ...
	I1212 01:03:33.298105  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:33.302482  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:33.302517  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:33.302532  141884 node_conditions.go:105] duration metric: took 4.418219ms to run NodePressure ...
	I1212 01:03:33.302555  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:33.728028  141884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735780  141884 kubeadm.go:739] kubelet initialised
	I1212 01:03:33.735810  141884 kubeadm.go:740] duration metric: took 7.738781ms waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735824  141884 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:33.743413  141884 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:35.750012  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
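Note: the pod_ready.go waits above amount to polling each pod's Ready condition in kube-system. A small client-go sketch of that check follows; the kubeconfig path is a placeholder and the pod name is copied from the log, and this is not minikube's own pod_ready.go implementation.

// Illustrative sketch: wait until a pod reports the Ready condition, using client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-xn886", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}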
	I1212 01:03:39.348909  141411 start.go:364] duration metric: took 54.693436928s to acquireMachinesLock for "no-preload-242725"
	I1212 01:03:39.348976  141411 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:39.348990  141411 fix.go:54] fixHost starting: 
	I1212 01:03:39.349442  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:39.349485  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:39.367203  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I1212 01:03:39.367584  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:39.368158  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:03:39.368185  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:39.368540  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:39.368717  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:39.368854  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:03:39.370433  141411 fix.go:112] recreateIfNeeded on no-preload-242725: state=Stopped err=<nil>
	I1212 01:03:39.370460  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	W1212 01:03:39.370594  141411 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:39.372621  141411 out.go:177] * Restarting existing kvm2 VM for "no-preload-242725" ...
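Note: the fixHost sequence above finds the existing domain stopped and restarts it instead of recreating it. minikube drives this through its docker-machine-driver-kvm2 plugin over RPC (the GetState/Start calls in the log); the sketch below only illustrates the equivalent check-and-start against libvirt by shelling out to virsh, which is an analogy rather than minikube's mechanism.

// Illustrative sketch: restart a stopped KVM domain with virsh via os/exec.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func domainState(name string) (string, error) {
	out, err := exec.Command("virsh", "domstate", name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const domain = "no-preload-242725" // domain name taken from the log above
	state, err := domainState(domain)
	if err != nil {
		fmt.Println("domstate failed:", err)
		return
	}
	fmt.Println("current state:", state)
	if state == "shut off" {
		// Start the existing domain rather than recreating it, matching
		// "Skipping create...Using existing machine configuration" above.
		if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
			fmt.Printf("virsh start failed: %v\n%s", err, out)
			return
		}
		fmt.Println("domain started")
	}
}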
	I1212 01:03:35.206417  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.208384  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.818233  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818777  142150 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 01:03:37.818808  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818818  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 01:03:37.819321  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.819376  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | skip adding static IP to network mk-old-k8s-version-738445 - found existing host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"}
	I1212 01:03:37.819390  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
	I1212 01:03:37.819412  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 01:03:37.819428  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 01:03:37.821654  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822057  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.822084  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822234  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 01:03:37.822265  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 01:03:37.822311  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:37.822325  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 01:03:37.822346  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 01:03:37.951989  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:37.952380  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 01:03:37.953037  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:37.955447  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.955770  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.955801  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.956073  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 01:03:37.956261  142150 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:37.956281  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:37.956490  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:37.958938  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959225  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.959262  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959406  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:37.959569  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959749  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959912  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:37.960101  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:37.960348  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:37.960364  142150 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:38.076202  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:38.076231  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076484  142150 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 01:03:38.076506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076678  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.079316  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079689  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.079717  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.080047  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080178  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080313  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.080481  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.080693  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.080708  142150 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 01:03:38.212896  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 01:03:38.212934  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.215879  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216314  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.216353  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216568  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.216792  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.216980  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.217138  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.217321  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.217556  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.217574  142150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:38.341064  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:38.341103  142150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:38.341148  142150 buildroot.go:174] setting up certificates
	I1212 01:03:38.341167  142150 provision.go:84] configureAuth start
	I1212 01:03:38.341182  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.341471  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:38.343939  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344355  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.344385  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.346597  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.346910  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.346960  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.347103  142150 provision.go:143] copyHostCerts
	I1212 01:03:38.347168  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:38.347188  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:38.347247  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:38.347363  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:38.347373  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:38.347397  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:38.347450  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:38.347457  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:38.347476  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:38.347523  142150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
	I1212 01:03:38.675149  142150 provision.go:177] copyRemoteCerts
	I1212 01:03:38.675217  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:38.675251  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.678239  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678639  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.678677  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.679049  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.679174  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.679294  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:38.770527  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:38.797696  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:38.822454  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 01:03:38.847111  142150 provision.go:87] duration metric: took 505.925391ms to configureAuth
	I1212 01:03:38.847145  142150 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:38.847366  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 01:03:38.847459  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.850243  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850594  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.850621  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850779  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.850981  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851153  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851340  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.851581  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.851786  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.851803  142150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:39.093404  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:39.093440  142150 machine.go:96] duration metric: took 1.137164233s to provisionDockerMachine
	I1212 01:03:39.093457  142150 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 01:03:39.093474  142150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:39.093516  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.093848  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:39.093891  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.096719  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097117  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.097151  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097305  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.097497  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.097650  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.097773  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.186726  142150 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:39.191223  142150 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:39.191249  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:39.191337  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:39.191438  142150 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:39.191557  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:39.201460  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:39.229101  142150 start.go:296] duration metric: took 135.624628ms for postStartSetup
	I1212 01:03:39.229146  142150 fix.go:56] duration metric: took 20.080331642s for fixHost
	I1212 01:03:39.229168  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.231985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232443  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.232479  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232702  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.232913  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233076  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233213  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.233368  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:39.233632  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:39.233649  142150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:39.348721  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965419.319505647
	
	I1212 01:03:39.348749  142150 fix.go:216] guest clock: 1733965419.319505647
	I1212 01:03:39.348761  142150 fix.go:229] Guest: 2024-12-12 01:03:39.319505647 +0000 UTC Remote: 2024-12-12 01:03:39.229149912 +0000 UTC m=+234.032647876 (delta=90.355735ms)
	I1212 01:03:39.348787  142150 fix.go:200] guest clock delta is within tolerance: 90.355735ms
	I1212 01:03:39.348796  142150 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 20.20001796s
	I1212 01:03:39.348829  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.349099  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:39.352088  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352481  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.352510  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352667  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353244  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353428  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353528  142150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:39.353575  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.353645  142150 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:39.353674  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.356260  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356614  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.356644  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356675  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356908  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357112  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.357172  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.357293  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357375  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357438  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.357514  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357652  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357765  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.441961  142150 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:39.478428  142150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:39.631428  142150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:39.637870  142150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:39.637958  142150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:39.655923  142150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:39.655951  142150 start.go:495] detecting cgroup driver to use...
	I1212 01:03:39.656042  142150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:39.676895  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:39.692966  142150 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:39.693048  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:39.710244  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:39.725830  142150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:39.848998  142150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:40.014388  142150 docker.go:233] disabling docker service ...
	I1212 01:03:40.014458  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:40.035579  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:40.052188  142150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:40.184958  142150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:40.332719  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:40.349338  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:40.371164  142150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:03:40.371232  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.382363  142150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:40.382437  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.393175  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.404397  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.417867  142150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:40.432988  142150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:40.447070  142150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:40.447145  142150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:40.460260  142150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:40.472139  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:40.616029  142150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:40.724787  142150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:40.724874  142150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:40.732096  142150 start.go:563] Will wait 60s for crictl version
	I1212 01:03:40.732168  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:40.737266  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:40.790677  142150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:40.790765  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.825617  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.857257  142150 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
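Note: the provisioning sequence above (hostname, cert copies, crictl.yaml, cri-o config edits, systemctl restart crio, crictl version) is a series of commands executed over SSH by ssh_runner.go. A bare-bones sketch of running one such command with golang.org/x/crypto/ssh follows; the address, user, and key path are copied from the log, host-key checking is skipped just like the StrictHostKeyChecking=no invocation above, and this is not minikube's ssh_runner implementation.

// Illustrative sketch: run one remote command over SSH with key authentication.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.72.25:22", "docker",
		"/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa",
		"sudo /usr/bin/crictl version")
	fmt.Println(out, err)
}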
	I1212 01:03:37.750453  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.752224  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.374093  141411 main.go:141] libmachine: (no-preload-242725) Calling .Start
	I1212 01:03:39.374303  141411 main.go:141] libmachine: (no-preload-242725) Ensuring networks are active...
	I1212 01:03:39.375021  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network default is active
	I1212 01:03:39.375456  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network mk-no-preload-242725 is active
	I1212 01:03:39.375951  141411 main.go:141] libmachine: (no-preload-242725) Getting domain xml...
	I1212 01:03:39.376726  141411 main.go:141] libmachine: (no-preload-242725) Creating domain...
	I1212 01:03:40.703754  141411 main.go:141] libmachine: (no-preload-242725) Waiting to get IP...
	I1212 01:03:40.705274  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.705752  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.705821  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.705709  143226 retry.go:31] will retry after 196.576482ms: waiting for machine to come up
	I1212 01:03:40.904341  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.904718  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.904740  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.904669  143226 retry.go:31] will retry after 375.936901ms: waiting for machine to come up
	I1212 01:03:41.282278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.282839  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.282871  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.282793  143226 retry.go:31] will retry after 427.731576ms: waiting for machine to come up
	I1212 01:03:41.712553  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.713198  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.713231  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.713084  143226 retry.go:31] will retry after 421.07445ms: waiting for machine to come up
	I1212 01:03:39.707174  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:41.711103  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.207685  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:40.858851  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:40.861713  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:40.862166  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862355  142150 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:40.866911  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
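
The bash one-liner rewrites /etc/hosts without a root shell redirect: it filters out any stale host.minikube.internal entry, appends the gateway IP, writes the result to a temp file, and sudo-copies it back. Afterwards the guest should resolve the host like this (values from the log):

    grep host.minikube.internal /etc/hosts
    # 192.168.72.1	host.minikube.internal
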
	I1212 01:03:40.879513  142150 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:40.879655  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 01:03:40.879718  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:40.927436  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:40.927517  142150 ssh_runner.go:195] Run: which lz4
	I1212 01:03:40.932446  142150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:40.937432  142150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:40.937461  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 01:03:42.695407  142150 crio.go:462] duration metric: took 1.763008004s to copy over tarball
	I1212 01:03:42.695494  142150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:41.768335  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.252708  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.754333  141884 pod_ready.go:93] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.754362  141884 pod_ready.go:82] duration metric: took 11.010925207s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.754371  141884 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760121  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.760142  141884 pod_ready.go:82] duration metric: took 5.764171ms for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760151  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765554  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.765575  141884 pod_ready.go:82] duration metric: took 5.417017ms for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765589  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:42.135878  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.136341  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.136367  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.136284  143226 retry.go:31] will retry after 477.81881ms: waiting for machine to come up
	I1212 01:03:42.616400  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.616906  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.616929  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.616858  143226 retry.go:31] will retry after 597.608319ms: waiting for machine to come up
	I1212 01:03:43.215837  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:43.216430  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:43.216454  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:43.216363  143226 retry.go:31] will retry after 1.118837214s: waiting for machine to come up
	I1212 01:03:44.336666  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:44.337229  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:44.337253  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:44.337187  143226 retry.go:31] will retry after 1.008232952s: waiting for machine to come up
	I1212 01:03:45.346868  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:45.347386  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:45.347423  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:45.347307  143226 retry.go:31] will retry after 1.735263207s: waiting for machine to come up
	I1212 01:03:47.084570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:47.084980  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:47.085012  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:47.084931  143226 retry.go:31] will retry after 1.662677797s: waiting for machine to come up
	I1212 01:03:46.208324  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.707694  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:45.698009  142150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002470206s)
	I1212 01:03:45.698041  142150 crio.go:469] duration metric: took 3.002598421s to extract the tarball
	I1212 01:03:45.698057  142150 ssh_runner.go:146] rm: /preloaded.tar.lz4
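
The preload tarball is lz4-compressed container storage that gets unpacked under /var (hence the -C /var flag above). To peek at what it ships without extracting it, something like this works on the CI host (a sketch; path taken from the scp step above):

    tar -I lz4 -tf /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 | head
    # expected to list lib/containers/... overlay-storage entries (cri-o's image store)
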
	I1212 01:03:45.746245  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:45.783730  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:45.783758  142150 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:03:45.783842  142150 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.783850  142150 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.783909  142150 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.783919  142150 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:45.783965  142150 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.783988  142150 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.783989  142150 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.783935  142150 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.785706  142150 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.785722  142150 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785696  142150 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.785757  142150 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.010563  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.011085  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 01:03:46.072381  142150 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 01:03:46.072424  142150 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.072478  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.113400  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.113431  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.114036  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.114169  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.120739  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.124579  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.124728  142150 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 01:03:46.124754  142150 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 01:03:46.124784  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287160  142150 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 01:03:46.287214  142150 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.287266  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287272  142150 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 01:03:46.287303  142150 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.287353  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294327  142150 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 01:03:46.294369  142150 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.294417  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294420  142150 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 01:03:46.294451  142150 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.294488  142150 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 01:03:46.294501  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294519  142150 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.294547  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.294561  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294640  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.296734  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.297900  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.310329  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.400377  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.400443  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.400478  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.400489  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.426481  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.434403  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.434471  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.568795  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 01:03:46.568915  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.568956  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.569017  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.584299  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.584337  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.608442  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.716715  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.716749  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 01:03:46.727723  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.730180  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 01:03:46.730347  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 01:03:46.744080  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 01:03:46.770152  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 01:03:46.802332  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 01:03:48.053863  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:48.197060  142150 cache_images.go:92] duration metric: took 2.413284252s to LoadCachedImages
	W1212 01:03:48.197176  142150 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
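
The warning means the per-image cache files were never materialized on the CI host, so the cached-image load is abandoned and minikube continues without those copies. A quick confirmation on the host (hypothetical check; path taken from the warning):

    ls -l /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/
    # etcd_3.4.13-0 is absent, which is exactly the stat failure reported above
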
	I1212 01:03:48.197197  142150 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 01:03:48.197352  142150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:48.197443  142150 ssh_runner.go:195] Run: crio config
	I1212 01:03:48.246700  142150 cni.go:84] Creating CNI manager for ""
	I1212 01:03:48.246731  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:48.246743  142150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:48.246771  142150 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 01:03:48.246952  142150 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:48.247031  142150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 01:03:48.257337  142150 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:48.257412  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:48.267272  142150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 01:03:48.284319  142150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:48.301365  142150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1212 01:03:48.321703  142150 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:48.326805  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:48.343523  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:48.476596  142150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:48.497742  142150 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 01:03:48.497830  142150 certs.go:194] generating shared ca certs ...
	I1212 01:03:48.497859  142150 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:48.498094  142150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:48.498160  142150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:48.498177  142150 certs.go:256] generating profile certs ...
	I1212 01:03:48.498311  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 01:03:48.498388  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 01:03:48.498445  142150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 01:03:48.498603  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:48.498651  142150 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:48.498665  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:48.498700  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:48.498732  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:48.498761  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:48.498816  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:48.499418  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:48.546900  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:48.587413  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:48.617873  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:48.645334  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 01:03:48.673348  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:03:48.707990  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:48.748273  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:03:48.785187  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:48.818595  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:48.843735  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:48.871353  142150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:48.893168  142150 ssh_runner.go:195] Run: openssl version
	I1212 01:03:48.902034  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:48.916733  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921766  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921849  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.928169  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:48.939794  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:48.951260  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957920  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957987  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.965772  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:48.977889  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:48.989362  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995796  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995866  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:49.002440  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
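
Each openssl x509 -hash / ln -fs pair above builds the subject-hash symlink that OpenSSL's CA lookup expects under /etc/ssl/certs. For the minikube CA this looks like (hash value as reported in the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, so the trust link is /etc/ssl/certs/b5213941.0 -> minikubeCA.pem
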
	I1212 01:03:49.014144  142150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:49.020570  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:49.027464  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:49.033770  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:49.040087  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:49.046103  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:49.052288  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
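
The -checkend checks take an interval in seconds; 86400 is 24 hours, and an exit status of 0 means the certificate will still be valid for that long. A standalone version of the same probe (a sketch using one of the certs checked above):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"
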
	I1212 01:03:49.058638  142150 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:49.058762  142150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:49.058820  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.101711  142150 cri.go:89] found id: ""
	I1212 01:03:49.101800  142150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:49.113377  142150 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:49.113398  142150 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:49.113439  142150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:49.124296  142150 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:49.125851  142150 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:03:49.126876  142150 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-86355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-738445" cluster setting kubeconfig missing "old-k8s-version-738445" context setting]
	I1212 01:03:49.127925  142150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:49.129837  142150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:49.143200  142150 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.25
	I1212 01:03:49.143244  142150 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:49.143262  142150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:49.143339  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.190150  142150 cri.go:89] found id: ""
	I1212 01:03:49.190240  142150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:49.208500  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:49.219194  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:49.219221  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:49.219299  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:49.231345  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:49.231442  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:49.244931  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:49.254646  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:49.254721  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:49.264535  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.273770  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:49.273875  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.284129  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:49.293154  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:49.293221  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:49.302654  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:49.312579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:49.458825  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:48.069316  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.069362  141884 pod_ready.go:82] duration metric: took 3.303763458s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.069380  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328758  141884 pod_ready.go:93] pod "kube-proxy-7frgh" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.328784  141884 pod_ready.go:82] duration metric: took 259.396178ms for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328798  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337082  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.337106  141884 pod_ready.go:82] duration metric: took 8.298777ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337119  141884 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:50.343458  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.748914  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:48.749510  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:48.749535  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:48.749475  143226 retry.go:31] will retry after 2.670904101s: waiting for machine to come up
	I1212 01:03:51.421499  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:51.421915  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:51.421961  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:51.421862  143226 retry.go:31] will retry after 3.566697123s: waiting for machine to come up
	I1212 01:03:50.708435  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:53.207675  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:50.328104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.599973  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.749920  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
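
The certs, kubeconfig, kubelet-start, control-plane, and etcd phases above regenerate the static pod manifests under the staticPodPath configured earlier, which kubelet then launches. A quick sanity check on the node (a sketch; the expected file names are the standard kubeadm ones):

    sudo ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
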
	I1212 01:03:50.834972  142150 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:50.835093  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.335779  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.835728  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.335936  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.335817  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.836146  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.335264  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.835917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
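
The loop above polls roughly every 500ms until kubelet brings the kube-apiserver static pod up. The pgrep flags mean: -f matches against the full command line, -x requires the pattern to match that whole line, and -n reports only the newest matching PID. Run by hand it looks like this (a sketch):

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # prints the apiserver PID once its container is running; empty output (exit 1) until then
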
	I1212 01:03:52.344098  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.344166  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:56.345835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.990515  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:54.990916  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:54.990941  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:54.990869  143226 retry.go:31] will retry after 4.288131363s: waiting for machine to come up
	I1212 01:03:55.706167  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:57.707796  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:55.335677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:55.835164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.335826  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.835888  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.335539  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.835520  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.335630  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.835457  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.835939  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.843944  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.844210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:59.284312  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.284807  141411 main.go:141] libmachine: (no-preload-242725) Found IP for machine: 192.168.61.222
	I1212 01:03:59.284834  141411 main.go:141] libmachine: (no-preload-242725) Reserving static IP address...
	I1212 01:03:59.284851  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has current primary IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.285300  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.285334  141411 main.go:141] libmachine: (no-preload-242725) DBG | skip adding static IP to network mk-no-preload-242725 - found existing host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"}
	I1212 01:03:59.285357  141411 main.go:141] libmachine: (no-preload-242725) Reserved static IP address: 192.168.61.222
	I1212 01:03:59.285376  141411 main.go:141] libmachine: (no-preload-242725) Waiting for SSH to be available...
	I1212 01:03:59.285390  141411 main.go:141] libmachine: (no-preload-242725) DBG | Getting to WaitForSSH function...
	I1212 01:03:59.287532  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287840  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.287869  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287970  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH client type: external
	I1212 01:03:59.287998  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa (-rw-------)
	I1212 01:03:59.288043  141411 main.go:141] libmachine: (no-preload-242725) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:59.288066  141411 main.go:141] libmachine: (no-preload-242725) DBG | About to run SSH command:
	I1212 01:03:59.288092  141411 main.go:141] libmachine: (no-preload-242725) DBG | exit 0
	I1212 01:03:59.415723  141411 main.go:141] libmachine: (no-preload-242725) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:59.416104  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetConfigRaw
	I1212 01:03:59.416755  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.419446  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.419848  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.419879  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.420182  141411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json ...
	I1212 01:03:59.420388  141411 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:59.420412  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:59.420637  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.422922  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423257  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.423278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423432  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.423626  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423787  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423918  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.424051  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.424222  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.424231  141411 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:59.536768  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:59.536796  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537016  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:03:59.537042  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537234  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.539806  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540110  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.540141  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540337  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.540509  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540665  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540800  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.540973  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.541155  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.541171  141411 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-242725 && echo "no-preload-242725" | sudo tee /etc/hostname
	I1212 01:03:59.668244  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-242725
	
	I1212 01:03:59.668269  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.671021  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671353  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.671374  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671630  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.671851  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672000  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672160  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.672310  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.672485  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.672502  141411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-242725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-242725/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-242725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:59.792950  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:59.792985  141411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:59.793011  141411 buildroot.go:174] setting up certificates
	I1212 01:03:59.793024  141411 provision.go:84] configureAuth start
	I1212 01:03:59.793041  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.793366  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.796185  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796599  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.796638  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796783  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.799165  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799532  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.799558  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799711  141411 provision.go:143] copyHostCerts
	I1212 01:03:59.799780  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:59.799804  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:59.799869  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:59.800004  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:59.800015  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:59.800051  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:59.800144  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:59.800155  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:59.800182  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:59.800263  141411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.no-preload-242725 san=[127.0.0.1 192.168.61.222 localhost minikube no-preload-242725]
	I1212 01:03:59.987182  141411 provision.go:177] copyRemoteCerts
	I1212 01:03:59.987249  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:59.987290  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.989902  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990285  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.990317  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990520  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.990712  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.990856  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.990981  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.078289  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:04:00.103149  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:04:00.131107  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:04:00.159076  141411 provision.go:87] duration metric: took 366.034024ms to configureAuth
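(The certificate steps above can be spot-checked from the host. This is an optional verification sketch, not part of the test: the profile name and the /etc/docker/server.pem path are taken from the log, the openssl invocation is generic, and it assumes openssl is present in the guest image.)

	# inspect the SANs of the server certificate that configureAuth pushed to the guest
	minikube -p no-preload-242725 ssh -- \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -text" | grep -A1 'Subject Alternative Name'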
	I1212 01:04:00.159103  141411 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:04:00.159305  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:04:00.159401  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.162140  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162537  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.162570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162696  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.162864  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163016  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163124  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.163262  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.163436  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.163451  141411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:04:00.407729  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:04:00.407758  141411 machine.go:96] duration metric: took 987.35601ms to provisionDockerMachine
	I1212 01:04:00.407773  141411 start.go:293] postStartSetup for "no-preload-242725" (driver="kvm2")
	I1212 01:04:00.407787  141411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:04:00.407810  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.408186  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:04:00.408218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.410950  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411329  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.411360  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411585  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.411809  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.411981  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.412115  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.498221  141411 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:04:00.502621  141411 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:04:00.502644  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:04:00.502705  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:04:00.502779  141411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:04:00.502863  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:04:00.512322  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:00.540201  141411 start.go:296] duration metric: took 132.410555ms for postStartSetup
	I1212 01:04:00.540250  141411 fix.go:56] duration metric: took 21.191260423s for fixHost
	I1212 01:04:00.540287  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.542631  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.542983  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.543011  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.543212  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.543393  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543556  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543702  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.543867  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.544081  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.544095  141411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:04:00.656532  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965440.609922961
	
	I1212 01:04:00.656560  141411 fix.go:216] guest clock: 1733965440.609922961
	I1212 01:04:00.656569  141411 fix.go:229] Guest: 2024-12-12 01:04:00.609922961 +0000 UTC Remote: 2024-12-12 01:04:00.540255801 +0000 UTC m=+358.475944555 (delta=69.66716ms)
	I1212 01:04:00.656597  141411 fix.go:200] guest clock delta is within tolerance: 69.66716ms
	I1212 01:04:00.656616  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 21.307670093s
	I1212 01:04:00.656644  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.656898  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:00.659345  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659694  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.659722  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659878  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660405  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660584  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660663  141411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:04:00.660731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.660751  141411 ssh_runner.go:195] Run: cat /version.json
	I1212 01:04:00.660771  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.663331  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663458  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663717  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663757  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663789  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663802  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663867  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664039  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664044  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664201  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664202  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664359  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664359  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.664490  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.777379  141411 ssh_runner.go:195] Run: systemctl --version
	I1212 01:04:00.783765  141411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:04:00.933842  141411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:04:00.941376  141411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:04:00.941441  141411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:04:00.958993  141411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:04:00.959021  141411 start.go:495] detecting cgroup driver to use...
	I1212 01:04:00.959084  141411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:04:00.977166  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:04:00.991166  141411 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:04:00.991231  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:04:01.004993  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:04:01.018654  141411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:04:01.136762  141411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:04:01.300915  141411 docker.go:233] disabling docker service ...
	I1212 01:04:01.301036  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:04:01.316124  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:04:01.329544  141411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:04:01.451034  141411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:04:01.583471  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:04:01.611914  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:04:01.632628  141411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:04:01.632706  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.644315  141411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:04:01.644384  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.656980  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.668295  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.679885  141411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:04:01.692032  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.703893  141411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.724486  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.737251  141411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:04:01.748955  141411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:04:01.749025  141411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:04:01.763688  141411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:04:01.773871  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:01.903690  141411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:04:02.006921  141411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:04:02.007013  141411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:04:02.013116  141411 start.go:563] Will wait 60s for crictl version
	I1212 01:04:02.013187  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.017116  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:04:02.061210  141411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:04:02.061304  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.093941  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.124110  141411 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
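(For readability: the sed edits logged above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the values below. This is a reconstruction from the logged commands, not a dump taken from the VM; the section headers are shown only for orientation and any other defaults in that drop-in are assumed untouched.)

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]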
	I1212 01:03:59.708028  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:01.709056  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:04.207527  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.335673  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:00.835254  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.336063  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.835209  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.335874  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.835468  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.335332  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.835312  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.335965  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.835626  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.845618  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.346194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:02.125647  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:02.128481  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.128914  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:02.128973  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.129205  141411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 01:04:02.133801  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:02.148892  141411 kubeadm.go:883] updating cluster {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:04:02.149001  141411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:04:02.149033  141411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:04:02.187762  141411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:04:02.187805  141411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:04:02.187934  141411 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.187988  141411 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.188025  141411 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.188070  141411 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.188118  141411 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.188220  141411 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.188332  141411 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1212 01:04:02.188501  141411 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.189594  141411 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.189674  141411 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.189892  141411 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.190015  141411 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1212 01:04:02.190121  141411 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.190152  141411 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.190169  141411 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.190746  141411 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.372557  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.375185  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.389611  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.394581  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.396799  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.408346  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1212 01:04:02.413152  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.438165  141411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1212 01:04:02.438217  141411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.438272  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.518752  141411 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1212 01:04:02.518804  141411 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.518856  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.556287  141411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1212 01:04:02.556329  141411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.556371  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569629  141411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1212 01:04:02.569671  141411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.569683  141411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1212 01:04:02.569721  141411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.569731  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569770  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667454  141411 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1212 01:04:02.667511  141411 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.667510  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.667532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.667549  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667632  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.667644  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.667671  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.683807  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.784024  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.797709  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.797836  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.797848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.797969  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.822411  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.880580  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.927305  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.928532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.928661  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.938172  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.973083  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:03.023699  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1212 01:04:03.023813  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.069822  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1212 01:04:03.069879  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1212 01:04:03.069920  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1212 01:04:03.069945  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:03.069973  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:03.069990  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:03.070037  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1212 01:04:03.070116  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:03.094188  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1212 01:04:03.094210  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094229  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1212 01:04:03.094249  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094285  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1212 01:04:03.094313  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1212 01:04:03.094379  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1212 01:04:03.094399  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1212 01:04:03.094480  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:04.469173  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.174822  141411 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.080313699s)
	I1212 01:04:05.174869  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1212 01:04:05.174899  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.08062641s)
	I1212 01:04:05.174928  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1212 01:04:05.174968  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.174994  141411 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 01:04:05.175034  141411 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.175086  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:05.175038  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.179340  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:06.207626  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:08.706815  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.335479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:05.835485  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.335252  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.835837  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.335166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.835880  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.336166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.335533  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.835771  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.843908  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:07.654693  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.479543185s)
	I1212 01:04:07.654721  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1212 01:04:07.654743  141411 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.654775  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.475408038s)
	I1212 01:04:07.654848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:07.654784  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.699286  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:09.647620  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.948278157s)
	I1212 01:04:09.647642  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.992718083s)
	I1212 01:04:09.647662  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1212 01:04:09.647683  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 01:04:09.647686  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647734  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647776  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:09.652886  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 01:04:11.112349  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.464585062s)
	I1212 01:04:11.112384  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1212 01:04:11.112412  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.112462  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.206933  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.208623  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.335255  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:10.835915  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.335375  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.835283  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.335618  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.835897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.335425  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.835757  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.335839  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.836078  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.844442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:14.845189  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.083753  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.971262547s)
	I1212 01:04:13.083788  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1212 01:04:13.083821  141411 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:13.083878  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:17.087777  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.003870257s)
	I1212 01:04:17.087818  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1212 01:04:17.087853  141411 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:17.087917  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:15.707981  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:18.207205  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:15.336090  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:15.835274  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.335372  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.835280  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.335431  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.835268  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.335492  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.835414  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.335266  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.835632  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.345467  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:19.845255  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:17.734979  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 01:04:17.735041  141411 cache_images.go:123] Successfully loaded all cached images
	I1212 01:04:17.735049  141411 cache_images.go:92] duration metric: took 15.547226992s to LoadCachedImages
	I1212 01:04:17.735066  141411 kubeadm.go:934] updating node { 192.168.61.222 8443 v1.31.2 crio true true} ...
	I1212 01:04:17.735209  141411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-242725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:04:17.735311  141411 ssh_runner.go:195] Run: crio config
	I1212 01:04:17.780826  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:17.780850  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:17.780859  141411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:04:17.780882  141411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.222 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-242725 NodeName:no-preload-242725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:04:17.781025  141411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-242725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.222"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.222"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:04:17.781091  141411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:04:17.792290  141411 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:04:17.792374  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:04:17.802686  141411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1212 01:04:17.819496  141411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:04:17.836164  141411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1212 01:04:17.855844  141411 ssh_runner.go:195] Run: grep 192.168.61.222	control-plane.minikube.internal$ /etc/hosts
	I1212 01:04:17.860034  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:17.874418  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:18.011357  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:04:18.028641  141411 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725 for IP: 192.168.61.222
	I1212 01:04:18.028666  141411 certs.go:194] generating shared ca certs ...
	I1212 01:04:18.028683  141411 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:04:18.028880  141411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:04:18.028940  141411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:04:18.028954  141411 certs.go:256] generating profile certs ...
	I1212 01:04:18.029088  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.key
	I1212 01:04:18.029164  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key.f2ca822e
	I1212 01:04:18.029235  141411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key
	I1212 01:04:18.029404  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:04:18.029438  141411 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:04:18.029449  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:04:18.029485  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:04:18.029517  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:04:18.029555  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:04:18.029621  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:18.030313  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:04:18.082776  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:04:18.116012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:04:18.147385  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:04:18.180861  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:04:18.225067  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:04:18.255999  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:04:18.280193  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:04:18.304830  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:04:18.329012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:04:18.355462  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:04:18.379991  141411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:04:18.397637  141411 ssh_runner.go:195] Run: openssl version
	I1212 01:04:18.403727  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:04:18.415261  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419809  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419885  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.425687  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:04:18.438938  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:04:18.452150  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457050  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457116  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.463151  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:04:18.476193  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:04:18.489034  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493916  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493969  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.500285  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:04:18.513016  141411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:04:18.517996  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:04:18.524465  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:04:18.530607  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:04:18.536857  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:04:18.542734  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:04:18.548786  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:04:18.554771  141411 kubeadm.go:392] StartCluster: {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:04:18.554897  141411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:04:18.554950  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.593038  141411 cri.go:89] found id: ""
	I1212 01:04:18.593131  141411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:04:18.604527  141411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:04:18.604550  141411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:04:18.604605  141411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:04:18.614764  141411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:04:18.616082  141411 kubeconfig.go:125] found "no-preload-242725" server: "https://192.168.61.222:8443"
	I1212 01:04:18.618611  141411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:04:18.628709  141411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.222
	I1212 01:04:18.628741  141411 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:04:18.628753  141411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:04:18.628814  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.673970  141411 cri.go:89] found id: ""
	I1212 01:04:18.674067  141411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:04:18.692603  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:04:18.704916  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:04:18.704940  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:04:18.704999  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:04:18.714952  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:04:18.715015  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:04:18.724982  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:04:18.734756  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:04:18.734817  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:04:18.744528  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.753898  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:04:18.753955  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.763929  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:04:18.773108  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:04:18.773153  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:04:18.782710  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:04:18.792750  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:18.902446  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.056638  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.154145942s)
	I1212 01:04:20.056677  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.275475  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.348697  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.483317  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:04:20.483487  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.983704  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.484485  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.526353  141411 api_server.go:72] duration metric: took 1.043031812s to wait for apiserver process to appear ...
	I1212 01:04:21.526389  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:04:21.526415  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:20.207458  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:22.212936  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:20.335276  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.835232  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.335776  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.835983  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.335369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.836160  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.335257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.835348  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.336170  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.835521  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.362548  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.362574  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.362586  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.380904  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.380939  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.527174  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.533112  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:24.533146  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.026678  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.031368  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.031409  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.526576  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.532260  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.532297  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:26.026741  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:26.031841  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:04:26.038198  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:04:26.038228  141411 api_server.go:131] duration metric: took 4.511829936s to wait for apiserver health ...
	I1212 01:04:26.038240  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:26.038249  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:26.040150  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:04:22.343994  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:24.344818  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.346428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.041669  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:04:26.055010  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:04:26.076860  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:04:26.092122  141411 system_pods.go:59] 8 kube-system pods found
	I1212 01:04:26.092154  141411 system_pods.go:61] "coredns-7c65d6cfc9-7w9dc" [878bfb78-fae5-4e05-b0ae-362841eace85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:04:26.092163  141411 system_pods.go:61] "etcd-no-preload-242725" [ed97c029-7933-4f4e-ab6c-f514b963ce21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:04:26.092170  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [df66d12b-b847-4ef3-b610-5679ff50e8c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:04:26.092175  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [eb5bc914-4267-41e8-9b37-26b7d3da9f68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:04:26.092180  141411 system_pods.go:61] "kube-proxy-rjwps" [fccefb3e-a282-4f0e-9070-11cc95bca868] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:04:26.092185  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [139de4ad-468c-4f1b-becf-3708bcaa7c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:04:26.092190  141411 system_pods.go:61] "metrics-server-6867b74b74-xzkbn" [16e0364c-18f9-43c2-9394-bc8548ce9caa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:04:26.092194  141411 system_pods.go:61] "storage-provisioner" [06c3232e-011a-4aff-b3ca-81858355bef4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:04:26.092200  141411 system_pods.go:74] duration metric: took 15.315757ms to wait for pod list to return data ...
	I1212 01:04:26.092208  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:04:26.095691  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:04:26.095715  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:04:26.095725  141411 node_conditions.go:105] duration metric: took 3.513466ms to run NodePressure ...
	I1212 01:04:26.095742  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:26.389652  141411 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398484  141411 kubeadm.go:739] kubelet initialised
	I1212 01:04:26.398513  141411 kubeadm.go:740] duration metric: took 8.824036ms waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398524  141411 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:04:26.406667  141411 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.416093  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416137  141411 pod_ready.go:82] duration metric: took 9.418311ms for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.416151  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416165  141411 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.422922  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422951  141411 pod_ready.go:82] duration metric: took 6.774244ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.422962  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422971  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.429822  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429854  141411 pod_ready.go:82] duration metric: took 6.874602ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.429866  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429875  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.483542  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483578  141411 pod_ready.go:82] duration metric: took 53.690915ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.483609  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483622  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:24.707572  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:27.207073  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:25.335742  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:25.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.335824  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.836097  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.335807  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.835612  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.335615  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.835140  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.335695  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.843868  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.844684  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:28.081872  141411 pod_ready.go:93] pod "kube-proxy-rjwps" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:28.081901  141411 pod_ready.go:82] duration metric: took 1.598267411s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:28.081921  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:30.088965  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:32.099574  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:29.706557  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:31.706767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:33.706983  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.335304  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:30.835767  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.335536  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.836051  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.336149  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.835257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.335529  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.835959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.336054  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.835955  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.344074  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.345401  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:34.588690  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:34.588715  141411 pod_ready.go:82] duration metric: took 6.50678624s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:34.588727  141411 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:36.596475  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:36.207357  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:38.207516  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.335472  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:35.835166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.335337  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.336098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.835686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.335195  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.835464  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.336101  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.836164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.844602  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.845115  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.095215  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:41.594487  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.708001  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:42.708477  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.336111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:40.835714  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.335249  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.836111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.335205  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.836175  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.335577  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.835336  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.335947  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.835740  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.344150  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.844336  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:43.595231  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:46.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.708857  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:47.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.207408  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:45.335845  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:45.835169  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.335842  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.835872  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.335682  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.835761  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.336087  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.836134  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.844848  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.344941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:48.595492  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.095830  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.706544  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:50.335959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:50.835873  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:50.835996  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:50.878308  142150 cri.go:89] found id: ""
	I1212 01:04:50.878347  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.878360  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:50.878377  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:50.878444  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:50.914645  142150 cri.go:89] found id: ""
	I1212 01:04:50.914673  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.914681  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:50.914687  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:50.914736  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:50.954258  142150 cri.go:89] found id: ""
	I1212 01:04:50.954286  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.954307  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:50.954314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:50.954376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:50.993317  142150 cri.go:89] found id: ""
	I1212 01:04:50.993353  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.993361  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:50.993367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:50.993430  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:51.028521  142150 cri.go:89] found id: ""
	I1212 01:04:51.028551  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.028565  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:51.028572  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:51.028653  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:51.064752  142150 cri.go:89] found id: ""
	I1212 01:04:51.064779  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.064791  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:51.064799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:51.064861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:51.099780  142150 cri.go:89] found id: ""
	I1212 01:04:51.099809  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.099820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:51.099828  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:51.099910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:51.140668  142150 cri.go:89] found id: ""
	I1212 01:04:51.140696  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.140704  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:51.140713  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:51.140747  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.181092  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:51.181123  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:51.239873  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:51.239914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:51.256356  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:51.256383  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:51.391545  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:51.391573  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:51.391602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
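	Each enumeration round above runs the same crictl query once per control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and every query returns no container IDs. The Go sketch below reproduces that loop with the exact crictl invocation from the log; it runs crictl locally rather than over SSH, and the helper names are assumptions of mine, not minikube internals.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs runs the same command the log shows:
	// sudo crictl ps -a --quiet --name=<component>
	// and returns the container IDs it prints (one per line), if any.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // empty slice when nothing matched
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%s: %v\n", c, ids)
			}
		}
	}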
	I1212 01:04:53.965098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:53.981900  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:53.981994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:54.033922  142150 cri.go:89] found id: ""
	I1212 01:04:54.033955  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.033967  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:54.033975  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:54.034038  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:54.084594  142150 cri.go:89] found id: ""
	I1212 01:04:54.084623  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.084634  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:54.084641  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:54.084704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:54.132671  142150 cri.go:89] found id: ""
	I1212 01:04:54.132700  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.132708  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:54.132714  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:54.132768  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:54.169981  142150 cri.go:89] found id: ""
	I1212 01:04:54.170011  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.170019  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:54.170025  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:54.170078  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:54.207708  142150 cri.go:89] found id: ""
	I1212 01:04:54.207737  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.207747  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:54.207753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:54.207812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:54.248150  142150 cri.go:89] found id: ""
	I1212 01:04:54.248176  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.248184  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:54.248191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:54.248240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:54.287792  142150 cri.go:89] found id: ""
	I1212 01:04:54.287820  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.287829  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:54.287835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:54.287892  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:54.322288  142150 cri.go:89] found id: ""
	I1212 01:04:54.322319  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.322330  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:54.322347  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:54.322364  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:54.378947  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:54.378989  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:54.394801  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:54.394845  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:54.473896  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:54.473916  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:54.473929  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:54.558076  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:54.558135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
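	The "describe nodes" step keeps failing with exit status 1 and "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings above: with no kube-apiserver container running, nothing answers on the kubeconfig's localhost:8443 endpoint. A hedged local sketch of that single check follows; the binary and kubeconfig paths are copied from the log, everything else is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// describeNodes runs the same kubectl command the log gathers, capturing
	// stdout and stderr together so the failure message is visible.
	func describeNodes() (string, error) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.20.0/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := describeNodes()
		if err != nil {
			// Expected while the apiserver is down: exit status 1 with
			// "The connection to the server localhost:8443 was refused".
			fmt.Printf("describe nodes failed: %v\n%s", err, out)
			return
		}
		fmt.Print(out)
	}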
	I1212 01:04:51.843857  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:54.345207  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.095934  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.598377  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.706720  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.707883  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.102923  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:57.117418  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:57.117478  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:57.157977  142150 cri.go:89] found id: ""
	I1212 01:04:57.158003  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.158012  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:57.158017  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:57.158074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:57.196388  142150 cri.go:89] found id: ""
	I1212 01:04:57.196417  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.196427  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:57.196432  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:57.196484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:57.238004  142150 cri.go:89] found id: ""
	I1212 01:04:57.238040  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.238048  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:57.238055  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:57.238124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:57.276619  142150 cri.go:89] found id: ""
	I1212 01:04:57.276665  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.276676  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:57.276684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:57.276750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:57.313697  142150 cri.go:89] found id: ""
	I1212 01:04:57.313733  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.313745  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:57.313753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:57.313823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:57.351569  142150 cri.go:89] found id: ""
	I1212 01:04:57.351616  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.351629  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:57.351637  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:57.351705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:57.386726  142150 cri.go:89] found id: ""
	I1212 01:04:57.386758  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.386766  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:57.386772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:57.386821  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:57.421496  142150 cri.go:89] found id: ""
	I1212 01:04:57.421524  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.421533  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:57.421543  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:57.421555  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:57.475374  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:57.475425  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:57.490771  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:57.490813  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:57.562485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:57.562513  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:57.562530  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:57.645022  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:57.645070  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.193526  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:00.209464  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:00.209539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:56.843562  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.843654  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:01.343428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.095640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.596162  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.207281  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:02.706000  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.248388  142150 cri.go:89] found id: ""
	I1212 01:05:00.248417  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.248426  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:00.248431  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:00.248480  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:00.284598  142150 cri.go:89] found id: ""
	I1212 01:05:00.284632  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.284642  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:00.284648  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:00.284710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:00.321068  142150 cri.go:89] found id: ""
	I1212 01:05:00.321107  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.321119  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:00.321127  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:00.321189  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:00.358622  142150 cri.go:89] found id: ""
	I1212 01:05:00.358651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.358660  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:00.358666  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:00.358720  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:00.398345  142150 cri.go:89] found id: ""
	I1212 01:05:00.398373  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.398383  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:00.398390  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:00.398442  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:00.437178  142150 cri.go:89] found id: ""
	I1212 01:05:00.437215  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.437227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:00.437235  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:00.437307  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:00.472621  142150 cri.go:89] found id: ""
	I1212 01:05:00.472651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.472662  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:00.472668  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:00.472735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:00.510240  142150 cri.go:89] found id: ""
	I1212 01:05:00.510268  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.510278  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:00.510288  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:00.510301  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:00.596798  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:00.596819  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:00.596830  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:00.673465  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:00.673506  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.716448  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:00.716485  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:00.770265  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:00.770303  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
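	Each round finishes by gathering host-level logs with the shell commands visible above: kubelet and CRI-O via journalctl, a filtered dmesg, and a container-status listing. The sketch below simply replays those commands through bash -c locally; the labels and helper function are mine, and the command strings are copied verbatim from the log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one log-collection command through bash -c and prints its
	// combined output under a short label.
	func gather(label, command string) {
		out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
		if err != nil {
			fmt.Printf("== %s (error: %v) ==\n%s\n", label, err, out)
			return
		}
		fmt.Printf("== %s ==\n%s\n", label, out)
	}

	func main() {
		gather("kubelet", `sudo journalctl -u kubelet -n 400`)
		gather("dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
		gather("CRI-O", `sudo journalctl -u crio -n 400`)
		gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	}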
	I1212 01:05:03.285159  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:03.299981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:03.300043  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:03.335198  142150 cri.go:89] found id: ""
	I1212 01:05:03.335227  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.335239  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:03.335248  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:03.335319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:03.372624  142150 cri.go:89] found id: ""
	I1212 01:05:03.372651  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.372659  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:03.372665  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:03.372712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:03.408235  142150 cri.go:89] found id: ""
	I1212 01:05:03.408267  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.408279  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:03.408286  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:03.408350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:03.448035  142150 cri.go:89] found id: ""
	I1212 01:05:03.448068  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.448083  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:03.448091  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:03.448144  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:03.488563  142150 cri.go:89] found id: ""
	I1212 01:05:03.488593  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.488602  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:03.488607  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:03.488658  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:03.527858  142150 cri.go:89] found id: ""
	I1212 01:05:03.527886  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.527905  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:03.527913  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:03.527969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:03.564004  142150 cri.go:89] found id: ""
	I1212 01:05:03.564034  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.564044  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:03.564052  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:03.564113  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:03.610648  142150 cri.go:89] found id: ""
	I1212 01:05:03.610679  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.610691  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:03.610702  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:03.610716  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:03.666958  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:03.666996  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.680927  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:03.680961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:03.762843  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:03.762876  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:03.762894  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:03.838434  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:03.838472  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:03.344025  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.844236  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:03.095197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.096865  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:04.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.208202  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:06.377590  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:06.391770  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:06.391861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:06.430050  142150 cri.go:89] found id: ""
	I1212 01:05:06.430083  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.430096  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:06.430103  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:06.430168  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:06.467980  142150 cri.go:89] found id: ""
	I1212 01:05:06.468014  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.468026  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:06.468033  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:06.468090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:06.505111  142150 cri.go:89] found id: ""
	I1212 01:05:06.505144  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.505156  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:06.505165  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:06.505235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:06.542049  142150 cri.go:89] found id: ""
	I1212 01:05:06.542091  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.542104  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:06.542112  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:06.542175  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:06.576957  142150 cri.go:89] found id: ""
	I1212 01:05:06.576982  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.576991  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:06.576997  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:06.577050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:06.613930  142150 cri.go:89] found id: ""
	I1212 01:05:06.613963  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.613974  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:06.613980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:06.614045  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:06.654407  142150 cri.go:89] found id: ""
	I1212 01:05:06.654441  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.654450  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:06.654455  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:06.654503  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:06.691074  142150 cri.go:89] found id: ""
	I1212 01:05:06.691103  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.691112  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:06.691122  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:06.691133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:06.748638  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:06.748674  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:06.762741  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:06.762772  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:06.833840  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:06.833867  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:06.833885  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:06.914595  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:06.914649  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.461666  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:09.478815  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:09.478889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:09.515975  142150 cri.go:89] found id: ""
	I1212 01:05:09.516007  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.516019  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:09.516042  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:09.516120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:09.556933  142150 cri.go:89] found id: ""
	I1212 01:05:09.556965  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.556977  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:09.556985  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:09.557050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:09.593479  142150 cri.go:89] found id: ""
	I1212 01:05:09.593509  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.593520  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:09.593528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:09.593595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:09.633463  142150 cri.go:89] found id: ""
	I1212 01:05:09.633501  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.633513  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:09.633522  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:09.633583  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:09.666762  142150 cri.go:89] found id: ""
	I1212 01:05:09.666789  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.666798  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:09.666804  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:09.666871  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:09.704172  142150 cri.go:89] found id: ""
	I1212 01:05:09.704206  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.704217  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:09.704228  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:09.704288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:09.749679  142150 cri.go:89] found id: ""
	I1212 01:05:09.749708  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.749717  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:09.749724  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:09.749791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:09.789339  142150 cri.go:89] found id: ""
	I1212 01:05:09.789370  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.789379  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:09.789388  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:09.789399  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:09.875218  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:09.875259  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.918042  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:09.918074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:09.971010  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:09.971052  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:09.985524  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:09.985553  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:10.059280  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:08.343968  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:10.844912  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.595940  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.596206  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.094527  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.707469  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.206124  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.206285  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
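	The pod_ready lines above are a readiness poll: the metrics-server pod in kube-system keeps reporting its Ready condition as "False", so the check is repeated until a deadline. The sketch below shows one equivalent way to poll with kubectl's jsonpath output; the pod name is taken from the log, while the 10-minute deadline and 2-second interval are assumptions of mine.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady asks kubectl for the pod's Ready condition status ("True"/"False").
	func podReady(namespace, name string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "pod", name,
			"-n", namespace,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(10 * time.Minute)
		for time.Now().Before(deadline) {
			ready, err := podReady("kube-system", "metrics-server-6867b74b74-5bms9")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // re-check, as the log does while status stays "False"
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}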
	I1212 01:05:12.560353  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:12.573641  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:12.573719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:12.611903  142150 cri.go:89] found id: ""
	I1212 01:05:12.611931  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.611940  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:12.611947  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:12.612019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:12.647038  142150 cri.go:89] found id: ""
	I1212 01:05:12.647078  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.647090  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:12.647099  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:12.647188  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:12.684078  142150 cri.go:89] found id: ""
	I1212 01:05:12.684111  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.684123  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:12.684132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:12.684194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:12.720094  142150 cri.go:89] found id: ""
	I1212 01:05:12.720125  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.720137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:12.720145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:12.720208  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:12.762457  142150 cri.go:89] found id: ""
	I1212 01:05:12.762492  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.762504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:12.762512  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:12.762564  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:12.798100  142150 cri.go:89] found id: ""
	I1212 01:05:12.798131  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.798139  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:12.798145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:12.798195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:12.832455  142150 cri.go:89] found id: ""
	I1212 01:05:12.832486  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.832494  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:12.832501  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:12.832558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:12.866206  142150 cri.go:89] found id: ""
	I1212 01:05:12.866239  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.866249  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:12.866258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:12.866273  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:12.918512  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:12.918550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:12.932506  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:12.932535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:13.011647  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:13.011670  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:13.011689  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:13.090522  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:13.090565  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:13.343045  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.343706  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.096430  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.097196  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.207697  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.634171  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:15.648003  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:15.648067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:15.684747  142150 cri.go:89] found id: ""
	I1212 01:05:15.684780  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.684788  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:15.684795  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:15.684856  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:15.723209  142150 cri.go:89] found id: ""
	I1212 01:05:15.723236  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.723245  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:15.723252  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:15.723299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:15.761473  142150 cri.go:89] found id: ""
	I1212 01:05:15.761504  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.761513  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:15.761519  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:15.761588  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:15.795637  142150 cri.go:89] found id: ""
	I1212 01:05:15.795668  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.795677  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:15.795685  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:15.795735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:15.835576  142150 cri.go:89] found id: ""
	I1212 01:05:15.835616  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.835628  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:15.835636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:15.835690  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:15.877331  142150 cri.go:89] found id: ""
	I1212 01:05:15.877359  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.877370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:15.877379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:15.877440  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:15.914225  142150 cri.go:89] found id: ""
	I1212 01:05:15.914255  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.914265  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:15.914271  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:15.914323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:15.949819  142150 cri.go:89] found id: ""
	I1212 01:05:15.949845  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.949853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:15.949862  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:15.949877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:16.029950  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:16.029991  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:16.071065  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:16.071094  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:16.126731  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:16.126786  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:16.140774  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:16.140807  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:16.210269  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:18.710498  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:18.725380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:18.725462  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:18.762409  142150 cri.go:89] found id: ""
	I1212 01:05:18.762438  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.762446  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:18.762453  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:18.762501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:18.800308  142150 cri.go:89] found id: ""
	I1212 01:05:18.800336  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.800344  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:18.800351  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:18.800419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:18.834918  142150 cri.go:89] found id: ""
	I1212 01:05:18.834947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.834955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:18.834962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:18.835012  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:18.872434  142150 cri.go:89] found id: ""
	I1212 01:05:18.872470  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.872481  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:18.872490  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:18.872551  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:18.906919  142150 cri.go:89] found id: ""
	I1212 01:05:18.906947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.906955  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:18.906962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:18.907011  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:18.944626  142150 cri.go:89] found id: ""
	I1212 01:05:18.944661  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.944671  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:18.944677  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:18.944728  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:18.981196  142150 cri.go:89] found id: ""
	I1212 01:05:18.981224  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.981233  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:18.981239  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:18.981290  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:19.017640  142150 cri.go:89] found id: ""
	I1212 01:05:19.017669  142150 logs.go:282] 0 containers: []
	W1212 01:05:19.017679  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:19.017691  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:19.017728  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:19.089551  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:19.089582  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:19.089602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:19.176914  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:19.176958  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:19.223652  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:19.223694  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:19.281292  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:19.281353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:17.344863  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:19.348835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.595465  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:20.708087  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:22.708298  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.797351  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:21.811040  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:21.811120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:21.847213  142150 cri.go:89] found id: ""
	I1212 01:05:21.847242  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.847253  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:21.847261  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:21.847323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:21.883925  142150 cri.go:89] found id: ""
	I1212 01:05:21.883952  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.883961  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:21.883967  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:21.884029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:21.925919  142150 cri.go:89] found id: ""
	I1212 01:05:21.925946  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.925955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:21.925961  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:21.926025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:21.963672  142150 cri.go:89] found id: ""
	I1212 01:05:21.963708  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.963719  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:21.963728  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:21.963794  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:22.000058  142150 cri.go:89] found id: ""
	I1212 01:05:22.000086  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.000094  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:22.000100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:22.000153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:22.036262  142150 cri.go:89] found id: ""
	I1212 01:05:22.036294  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.036305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:22.036314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:22.036381  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:22.072312  142150 cri.go:89] found id: ""
	I1212 01:05:22.072348  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.072361  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:22.072369  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:22.072428  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:22.109376  142150 cri.go:89] found id: ""
	I1212 01:05:22.109406  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.109413  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:22.109422  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:22.109436  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:22.183975  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:22.184006  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:22.184024  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:22.262037  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:22.262076  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:22.306902  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:22.306934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:22.361922  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:22.361964  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:24.877203  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:24.891749  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:24.891822  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:24.926934  142150 cri.go:89] found id: ""
	I1212 01:05:24.926974  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.926987  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:24.926997  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:24.927061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:24.961756  142150 cri.go:89] found id: ""
	I1212 01:05:24.961791  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.961803  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:24.961812  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:24.961872  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:25.001414  142150 cri.go:89] found id: ""
	I1212 01:05:25.001449  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.001462  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:25.001470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:25.001536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:25.038398  142150 cri.go:89] found id: ""
	I1212 01:05:25.038429  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.038438  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:25.038443  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:25.038499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:25.074146  142150 cri.go:89] found id: ""
	I1212 01:05:25.074175  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.074184  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:25.074191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:25.074266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:25.112259  142150 cri.go:89] found id: ""
	I1212 01:05:25.112287  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.112295  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:25.112303  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:25.112366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:25.148819  142150 cri.go:89] found id: ""
	I1212 01:05:25.148846  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.148853  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:25.148859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:25.148916  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:25.191229  142150 cri.go:89] found id: ""
	I1212 01:05:25.191262  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.191274  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:25.191286  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:25.191298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:21.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:24.344442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:26.344638  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:23.095266  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.096246  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.097041  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.208225  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.706184  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.280584  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:25.280641  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:25.325436  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:25.325473  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:25.380358  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:25.380406  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:25.394854  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:25.394889  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:25.474359  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:27.975286  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:27.989833  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:27.989893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:28.027211  142150 cri.go:89] found id: ""
	I1212 01:05:28.027242  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.027254  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:28.027262  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:28.027319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:28.063115  142150 cri.go:89] found id: ""
	I1212 01:05:28.063147  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.063158  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:28.063165  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:28.063226  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:28.121959  142150 cri.go:89] found id: ""
	I1212 01:05:28.121993  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.122006  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:28.122014  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:28.122074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:28.161636  142150 cri.go:89] found id: ""
	I1212 01:05:28.161666  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.161674  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:28.161680  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:28.161745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:28.197581  142150 cri.go:89] found id: ""
	I1212 01:05:28.197615  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.197627  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:28.197636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:28.197704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:28.234811  142150 cri.go:89] found id: ""
	I1212 01:05:28.234839  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.234849  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:28.234857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:28.234914  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:28.275485  142150 cri.go:89] found id: ""
	I1212 01:05:28.275510  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.275518  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:28.275524  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:28.275570  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:28.311514  142150 cri.go:89] found id: ""
	I1212 01:05:28.311551  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.311562  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:28.311574  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:28.311608  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:28.362113  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:28.362153  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:28.376321  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:28.376353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:28.460365  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:28.460394  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:28.460412  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:28.545655  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:28.545697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:28.850925  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.344959  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.595032  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.595989  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.706696  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:32.206728  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.206974  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.088684  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:31.103954  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:31.104033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:31.143436  142150 cri.go:89] found id: ""
	I1212 01:05:31.143468  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.143478  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:31.143488  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:31.143541  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:31.181127  142150 cri.go:89] found id: ""
	I1212 01:05:31.181162  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.181173  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:31.181181  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:31.181246  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:31.217764  142150 cri.go:89] found id: ""
	I1212 01:05:31.217794  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.217805  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:31.217812  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:31.217882  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:31.253648  142150 cri.go:89] found id: ""
	I1212 01:05:31.253674  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.253683  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:31.253690  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:31.253745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:31.292365  142150 cri.go:89] found id: ""
	I1212 01:05:31.292393  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.292401  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:31.292407  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:31.292455  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:31.329834  142150 cri.go:89] found id: ""
	I1212 01:05:31.329866  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.329876  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:31.329883  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:31.329934  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:31.368679  142150 cri.go:89] found id: ""
	I1212 01:05:31.368712  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.368720  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:31.368726  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:31.368784  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:31.409003  142150 cri.go:89] found id: ""
	I1212 01:05:31.409028  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.409036  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:31.409053  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:31.409068  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:31.462888  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:31.462927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:31.477975  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:31.478011  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:31.545620  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:31.545648  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:31.545665  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:31.626530  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:31.626570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.167917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:34.183293  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:34.183372  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:34.219167  142150 cri.go:89] found id: ""
	I1212 01:05:34.219191  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.219200  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:34.219206  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:34.219265  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:34.254552  142150 cri.go:89] found id: ""
	I1212 01:05:34.254580  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.254588  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:34.254594  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:34.254645  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:34.289933  142150 cri.go:89] found id: ""
	I1212 01:05:34.289960  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.289969  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:34.289975  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:34.290027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:34.325468  142150 cri.go:89] found id: ""
	I1212 01:05:34.325497  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.325505  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:34.325510  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:34.325558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:34.364154  142150 cri.go:89] found id: ""
	I1212 01:05:34.364185  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.364197  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:34.364205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:34.364256  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:34.400516  142150 cri.go:89] found id: ""
	I1212 01:05:34.400546  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.400554  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:34.400559  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:34.400621  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:34.437578  142150 cri.go:89] found id: ""
	I1212 01:05:34.437608  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.437616  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:34.437622  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:34.437687  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:34.472061  142150 cri.go:89] found id: ""
	I1212 01:05:34.472094  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.472105  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:34.472117  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:34.472135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.526286  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:34.526340  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:34.610616  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:34.610664  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:34.625098  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:34.625130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:34.699706  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:34.699736  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:34.699759  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:33.844343  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.343847  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.096631  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.594963  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.707213  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:39.207473  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:37.282716  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:37.299415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:37.299486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:37.337783  142150 cri.go:89] found id: ""
	I1212 01:05:37.337820  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.337833  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:37.337842  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:37.337910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:37.375491  142150 cri.go:89] found id: ""
	I1212 01:05:37.375526  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.375539  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:37.375547  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:37.375637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:37.417980  142150 cri.go:89] found id: ""
	I1212 01:05:37.418016  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.418028  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:37.418037  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:37.418115  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:37.454902  142150 cri.go:89] found id: ""
	I1212 01:05:37.454936  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.454947  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:37.454956  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:37.455029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:37.492144  142150 cri.go:89] found id: ""
	I1212 01:05:37.492175  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.492188  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:37.492196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:37.492266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:37.531054  142150 cri.go:89] found id: ""
	I1212 01:05:37.531085  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.531094  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:37.531100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:37.531161  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:37.565127  142150 cri.go:89] found id: ""
	I1212 01:05:37.565169  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.565191  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:37.565209  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:37.565269  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:37.601233  142150 cri.go:89] found id: ""
	I1212 01:05:37.601273  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.601286  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:37.601300  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:37.601315  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:37.652133  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:37.652172  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:37.666974  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:37.667007  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:37.744500  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:37.744527  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:37.744544  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:37.825572  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:37.825611  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:38.842756  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.845163  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:38.595482  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.595779  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:41.707367  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:44.206693  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.366883  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:40.380597  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:40.380662  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:40.417588  142150 cri.go:89] found id: ""
	I1212 01:05:40.417614  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.417623  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:40.417629  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:40.417681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:40.452506  142150 cri.go:89] found id: ""
	I1212 01:05:40.452535  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.452547  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:40.452555  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:40.452620  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:40.496623  142150 cri.go:89] found id: ""
	I1212 01:05:40.496657  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.496669  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:40.496681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:40.496755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:40.534202  142150 cri.go:89] found id: ""
	I1212 01:05:40.534241  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.534266  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:40.534277  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:40.534337  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:40.580317  142150 cri.go:89] found id: ""
	I1212 01:05:40.580346  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.580359  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:40.580367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:40.580437  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:40.616814  142150 cri.go:89] found id: ""
	I1212 01:05:40.616842  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.616850  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:40.616857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:40.616909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:40.653553  142150 cri.go:89] found id: ""
	I1212 01:05:40.653584  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.653593  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:40.653603  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:40.653667  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:40.687817  142150 cri.go:89] found id: ""
	I1212 01:05:40.687843  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.687852  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:40.687862  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:40.687872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:40.739304  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:40.739343  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:40.753042  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:40.753074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:40.820091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:40.820112  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:40.820126  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:40.903503  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:40.903561  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.446157  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:43.461289  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:43.461365  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:43.503352  142150 cri.go:89] found id: ""
	I1212 01:05:43.503385  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.503394  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:43.503402  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:43.503466  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:43.541576  142150 cri.go:89] found id: ""
	I1212 01:05:43.541610  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.541619  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:43.541626  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:43.541683  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:43.581255  142150 cri.go:89] found id: ""
	I1212 01:05:43.581285  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.581298  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:43.581305  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:43.581384  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:43.622081  142150 cri.go:89] found id: ""
	I1212 01:05:43.622114  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.622126  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:43.622135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:43.622201  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:43.657001  142150 cri.go:89] found id: ""
	I1212 01:05:43.657032  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.657041  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:43.657048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:43.657114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:43.691333  142150 cri.go:89] found id: ""
	I1212 01:05:43.691362  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.691370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:43.691376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:43.691425  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:43.728745  142150 cri.go:89] found id: ""
	I1212 01:05:43.728779  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.728791  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:43.728799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:43.728864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:43.764196  142150 cri.go:89] found id: ""
	I1212 01:05:43.764229  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.764241  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:43.764253  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:43.764268  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.804433  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:43.804469  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:43.858783  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:43.858822  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:43.873582  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:43.873610  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:43.949922  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:43.949945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:43.949962  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:43.343827  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.346793  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:43.095993  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.096437  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.206828  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:48.708067  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.531390  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:46.546806  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:46.546881  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:46.583062  142150 cri.go:89] found id: ""
	I1212 01:05:46.583103  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.583116  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:46.583124  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:46.583187  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:46.621483  142150 cri.go:89] found id: ""
	I1212 01:05:46.621513  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.621524  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:46.621532  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:46.621595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:46.658400  142150 cri.go:89] found id: ""
	I1212 01:05:46.658431  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.658440  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:46.658450  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:46.658520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:46.694368  142150 cri.go:89] found id: ""
	I1212 01:05:46.694393  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.694407  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:46.694413  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:46.694469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:46.733456  142150 cri.go:89] found id: ""
	I1212 01:05:46.733492  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.733504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:46.733513  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:46.733574  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:46.767206  142150 cri.go:89] found id: ""
	I1212 01:05:46.767236  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.767248  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:46.767255  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:46.767317  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:46.803520  142150 cri.go:89] found id: ""
	I1212 01:05:46.803554  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.803564  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:46.803575  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:46.803657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:46.849563  142150 cri.go:89] found id: ""
	I1212 01:05:46.849590  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.849597  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:46.849606  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:46.849618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:46.862800  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:46.862831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:46.931858  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:46.931883  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:46.931896  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:47.009125  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:47.009167  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.050830  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:47.050858  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.604639  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:49.618087  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:49.618153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:49.653674  142150 cri.go:89] found id: ""
	I1212 01:05:49.653703  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.653712  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:49.653718  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:49.653772  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:49.688391  142150 cri.go:89] found id: ""
	I1212 01:05:49.688428  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.688439  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:49.688446  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:49.688516  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:49.729378  142150 cri.go:89] found id: ""
	I1212 01:05:49.729412  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.729423  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:49.729432  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:49.729492  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:49.765171  142150 cri.go:89] found id: ""
	I1212 01:05:49.765198  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.765206  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:49.765213  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:49.765260  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:49.800980  142150 cri.go:89] found id: ""
	I1212 01:05:49.801018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.801027  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:49.801034  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:49.801086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:49.836122  142150 cri.go:89] found id: ""
	I1212 01:05:49.836149  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.836161  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:49.836169  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:49.836235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:49.873978  142150 cri.go:89] found id: ""
	I1212 01:05:49.874018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.874027  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:49.874032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:49.874086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:49.909709  142150 cri.go:89] found id: ""
	I1212 01:05:49.909741  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.909754  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:49.909766  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:49.909783  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.963352  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:49.963394  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:49.977813  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:49.977841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:50.054423  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:50.054452  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:50.054470  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:50.133375  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:50.133416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.843200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:49.844564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:47.595931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:50.095312  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.096092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:51.206349  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:53.206853  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.673427  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:52.687196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:52.687259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:52.725001  142150 cri.go:89] found id: ""
	I1212 01:05:52.725031  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.725039  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:52.725045  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:52.725110  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:52.760885  142150 cri.go:89] found id: ""
	I1212 01:05:52.760923  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.760934  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:52.760941  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:52.761025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:52.798583  142150 cri.go:89] found id: ""
	I1212 01:05:52.798615  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.798627  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:52.798635  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:52.798700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:52.835957  142150 cri.go:89] found id: ""
	I1212 01:05:52.835983  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.835991  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:52.835998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:52.836065  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:52.876249  142150 cri.go:89] found id: ""
	I1212 01:05:52.876281  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.876292  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:52.876299  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:52.876397  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:52.911667  142150 cri.go:89] found id: ""
	I1212 01:05:52.911700  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.911712  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:52.911720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:52.911796  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:52.946781  142150 cri.go:89] found id: ""
	I1212 01:05:52.946808  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.946820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:52.946827  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:52.946889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:52.985712  142150 cri.go:89] found id: ""
	I1212 01:05:52.985740  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.985752  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:52.985762  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:52.985778  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:53.038522  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:53.038563  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:53.052336  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:53.052382  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:53.132247  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:53.132280  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:53.132297  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:53.208823  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:53.208851  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:52.344518  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.344667  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.594738  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:56.595036  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:57.207827  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.747479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:55.760703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:55.760765  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:55.797684  142150 cri.go:89] found id: ""
	I1212 01:05:55.797720  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.797732  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:55.797740  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:55.797807  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:55.840900  142150 cri.go:89] found id: ""
	I1212 01:05:55.840933  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.840944  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:55.840953  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:55.841033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:55.879098  142150 cri.go:89] found id: ""
	I1212 01:05:55.879131  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.879144  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:55.879152  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:55.879217  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:55.914137  142150 cri.go:89] found id: ""
	I1212 01:05:55.914166  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.914174  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:55.914181  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:55.914238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:55.950608  142150 cri.go:89] found id: ""
	I1212 01:05:55.950635  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.950644  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:55.950654  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:55.950705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:55.992162  142150 cri.go:89] found id: ""
	I1212 01:05:55.992187  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.992196  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:55.992202  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:55.992254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:56.028071  142150 cri.go:89] found id: ""
	I1212 01:05:56.028097  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.028105  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:56.028111  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:56.028164  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:56.063789  142150 cri.go:89] found id: ""
	I1212 01:05:56.063814  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.063822  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:56.063832  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:56.063844  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:56.118057  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:56.118096  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.132908  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:56.132939  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:56.200923  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:56.200951  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:56.200971  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:56.283272  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:56.283321  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:58.825548  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:58.839298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:58.839368  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:58.874249  142150 cri.go:89] found id: ""
	I1212 01:05:58.874289  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.874301  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:58.874313  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:58.874391  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:58.909238  142150 cri.go:89] found id: ""
	I1212 01:05:58.909273  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.909286  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:58.909294  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:58.909359  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:58.945112  142150 cri.go:89] found id: ""
	I1212 01:05:58.945139  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.945146  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:58.945154  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:58.945203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:58.981101  142150 cri.go:89] found id: ""
	I1212 01:05:58.981153  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.981168  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:58.981176  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:58.981241  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:59.015095  142150 cri.go:89] found id: ""
	I1212 01:05:59.015135  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.015147  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:59.015158  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:59.015224  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:59.051606  142150 cri.go:89] found id: ""
	I1212 01:05:59.051640  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.051650  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:59.051659  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:59.051719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:59.088125  142150 cri.go:89] found id: ""
	I1212 01:05:59.088153  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.088161  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:59.088166  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:59.088223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:59.127803  142150 cri.go:89] found id: ""
	I1212 01:05:59.127829  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.127841  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:59.127853  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:59.127871  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:59.204831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:59.204857  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:59.204872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:59.285346  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:59.285387  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:59.324194  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:59.324233  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:59.378970  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:59.379022  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.845550  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.344473  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:58.595556  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:00.595723  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.706748  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.709131  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.893635  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:01.907481  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:01.907606  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:01.949985  142150 cri.go:89] found id: ""
	I1212 01:06:01.950022  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.950035  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:01.950043  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:01.950112  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:01.986884  142150 cri.go:89] found id: ""
	I1212 01:06:01.986914  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.986923  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:01.986928  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:01.986994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:02.025010  142150 cri.go:89] found id: ""
	I1212 01:06:02.025044  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.025056  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:02.025063  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:02.025137  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:02.061300  142150 cri.go:89] found id: ""
	I1212 01:06:02.061340  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.061352  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:02.061361  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:02.061427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:02.098627  142150 cri.go:89] found id: ""
	I1212 01:06:02.098667  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.098677  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:02.098684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:02.098744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:02.137005  142150 cri.go:89] found id: ""
	I1212 01:06:02.137030  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.137038  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:02.137044  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:02.137104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:02.172052  142150 cri.go:89] found id: ""
	I1212 01:06:02.172086  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.172096  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:02.172102  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:02.172154  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:02.207721  142150 cri.go:89] found id: ""
	I1212 01:06:02.207750  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.207761  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:02.207771  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:02.207787  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:02.221576  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:02.221605  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:02.291780  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:02.291812  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:02.291826  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:02.376553  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:02.376595  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:02.418407  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:02.418446  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:04.973347  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:04.988470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:04.988545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:05.024045  142150 cri.go:89] found id: ""
	I1212 01:06:05.024076  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.024085  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:05.024092  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:05.024149  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:05.060055  142150 cri.go:89] found id: ""
	I1212 01:06:05.060079  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.060089  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:05.060095  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:05.060145  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:05.097115  142150 cri.go:89] found id: ""
	I1212 01:06:05.097142  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.097152  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:05.097160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:05.097220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:05.133941  142150 cri.go:89] found id: ""
	I1212 01:06:05.133976  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.133990  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:05.133998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:05.134063  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:05.169157  142150 cri.go:89] found id: ""
	I1212 01:06:05.169185  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.169193  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:05.169200  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:05.169253  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:05.206434  142150 cri.go:89] found id: ""
	I1212 01:06:05.206464  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.206475  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:05.206484  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:05.206546  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:01.842981  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.843341  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.843811  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:02.597066  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:04.597793  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:07.095874  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:06.206955  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:08.208809  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.248363  142150 cri.go:89] found id: ""
	I1212 01:06:05.248397  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.248409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:05.248417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:05.248485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:05.284898  142150 cri.go:89] found id: ""
	I1212 01:06:05.284932  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.284945  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:05.284958  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:05.284974  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:05.362418  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:05.362445  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:05.362464  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:05.446289  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:05.446349  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:05.487075  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:05.487107  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:05.542538  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:05.542582  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.057586  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:08.070959  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:08.071019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:08.109906  142150 cri.go:89] found id: ""
	I1212 01:06:08.109936  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.109945  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:08.109951  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:08.110005  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:08.145130  142150 cri.go:89] found id: ""
	I1212 01:06:08.145159  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.145168  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:08.145175  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:08.145223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:08.183454  142150 cri.go:89] found id: ""
	I1212 01:06:08.183485  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.183496  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:08.183504  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:08.183573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:08.218728  142150 cri.go:89] found id: ""
	I1212 01:06:08.218752  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.218763  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:08.218772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:08.218835  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:08.256230  142150 cri.go:89] found id: ""
	I1212 01:06:08.256263  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.256274  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:08.256283  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:08.256345  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:08.294179  142150 cri.go:89] found id: ""
	I1212 01:06:08.294209  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.294221  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:08.294229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:08.294293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:08.335793  142150 cri.go:89] found id: ""
	I1212 01:06:08.335822  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.335835  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:08.335843  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:08.335905  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:08.387704  142150 cri.go:89] found id: ""
	I1212 01:06:08.387734  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.387746  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:08.387757  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:08.387773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:08.465260  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:08.465307  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:08.508088  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:08.508129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:08.558617  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:08.558655  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.573461  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:08.573489  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:08.649664  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:07.844408  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.343200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:09.595982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:12.094513  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.708379  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:13.207302  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:11.150614  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:11.164991  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:11.165062  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:11.201977  142150 cri.go:89] found id: ""
	I1212 01:06:11.202011  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.202045  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:11.202055  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:11.202124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:11.243638  142150 cri.go:89] found id: ""
	I1212 01:06:11.243667  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.243676  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:11.243682  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:11.243742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:11.279577  142150 cri.go:89] found id: ""
	I1212 01:06:11.279621  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.279634  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:11.279642  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:11.279709  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:11.317344  142150 cri.go:89] found id: ""
	I1212 01:06:11.317378  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.317386  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:11.317392  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:11.317457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:11.358331  142150 cri.go:89] found id: ""
	I1212 01:06:11.358361  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.358373  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:11.358381  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:11.358439  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:11.393884  142150 cri.go:89] found id: ""
	I1212 01:06:11.393911  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.393919  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:11.393926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:11.393974  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:11.433243  142150 cri.go:89] found id: ""
	I1212 01:06:11.433290  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.433302  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:11.433310  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:11.433374  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:11.478597  142150 cri.go:89] found id: ""
	I1212 01:06:11.478625  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.478637  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:11.478650  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:11.478667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:11.528096  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:11.528133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:11.542118  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:11.542149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:11.612414  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:11.612435  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:11.612451  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:11.689350  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:11.689389  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.230677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:14.245866  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:14.245970  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:14.283451  142150 cri.go:89] found id: ""
	I1212 01:06:14.283487  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.283495  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:14.283502  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:14.283552  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:14.318812  142150 cri.go:89] found id: ""
	I1212 01:06:14.318840  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.318848  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:14.318855  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:14.318904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:14.356489  142150 cri.go:89] found id: ""
	I1212 01:06:14.356519  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.356527  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:14.356533  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:14.356590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:14.394224  142150 cri.go:89] found id: ""
	I1212 01:06:14.394260  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.394271  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:14.394279  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:14.394350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:14.432440  142150 cri.go:89] found id: ""
	I1212 01:06:14.432467  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.432480  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:14.432488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:14.432540  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:14.469777  142150 cri.go:89] found id: ""
	I1212 01:06:14.469822  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.469835  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:14.469844  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:14.469904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:14.504830  142150 cri.go:89] found id: ""
	I1212 01:06:14.504860  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.504872  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:14.504881  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:14.504941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:14.539399  142150 cri.go:89] found id: ""
	I1212 01:06:14.539423  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.539432  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:14.539441  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:14.539454  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:14.552716  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:14.552749  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:14.628921  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:14.628945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:14.628959  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:14.707219  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:14.707255  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.765953  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:14.765986  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:12.343941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.843333  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.095296  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:16.596411  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:15.706990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.707150  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.324233  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:17.337428  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:17.337499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:17.374493  142150 cri.go:89] found id: ""
	I1212 01:06:17.374526  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.374538  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:17.374547  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:17.374616  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:17.408494  142150 cri.go:89] found id: ""
	I1212 01:06:17.408519  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.408527  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:17.408535  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:17.408582  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:17.452362  142150 cri.go:89] found id: ""
	I1212 01:06:17.452389  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.452397  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:17.452403  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:17.452456  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:17.493923  142150 cri.go:89] found id: ""
	I1212 01:06:17.493957  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.493968  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:17.493976  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:17.494037  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:17.529519  142150 cri.go:89] found id: ""
	I1212 01:06:17.529548  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.529556  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:17.529562  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:17.529610  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:17.570272  142150 cri.go:89] found id: ""
	I1212 01:06:17.570297  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.570305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:17.570312  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:17.570361  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:17.609326  142150 cri.go:89] found id: ""
	I1212 01:06:17.609360  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.609371  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:17.609379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:17.609470  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:17.642814  142150 cri.go:89] found id: ""
	I1212 01:06:17.642844  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.642853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:17.642863  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:17.642875  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:17.656476  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:17.656510  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:17.726997  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:17.727024  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:17.727039  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:17.803377  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:17.803424  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:17.851190  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:17.851222  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:17.344804  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.347642  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.096235  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.594712  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.707303  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.707482  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:24.208937  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
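
Interleaved with the gatherer output, each test process (141411, 141469, 141884) sits in its own wait loop, polling its metrics-server pod until the Ready condition turns True. A rough client-go sketch of such a readiness poll, assuming the kubeconfig path shown in the log and not minikube's actual pod_ready helper, might be:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Pod name taken from the log above; poll roughly every 2s, as the
		// timestamps there suggest.
		podName := "metrics-server-6867b74b74-xzkbn"
		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"False\"\n", podName)
			time.Sleep(2 * time.Second)
		}
	}

If the pod never turns Ready, a loop like this simply keeps emitting the "Ready":"False" line every couple of seconds until the caller's timeout expires, which is exactly the pattern visible in the interleaved lines.
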
	I1212 01:06:20.406953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:20.420410  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:20.420484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:20.462696  142150 cri.go:89] found id: ""
	I1212 01:06:20.462733  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.462744  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:20.462752  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:20.462815  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:20.522881  142150 cri.go:89] found id: ""
	I1212 01:06:20.522906  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.522915  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:20.522921  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:20.522979  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:20.575876  142150 cri.go:89] found id: ""
	I1212 01:06:20.575917  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.575928  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:20.575936  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:20.576003  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:20.627875  142150 cri.go:89] found id: ""
	I1212 01:06:20.627907  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.627919  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:20.627926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:20.627976  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:20.668323  142150 cri.go:89] found id: ""
	I1212 01:06:20.668353  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.668365  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:20.668372  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:20.668441  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:20.705907  142150 cri.go:89] found id: ""
	I1212 01:06:20.705942  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.705954  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:20.705963  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:20.706023  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:20.740221  142150 cri.go:89] found id: ""
	I1212 01:06:20.740249  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.740257  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:20.740263  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:20.740328  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:20.780346  142150 cri.go:89] found id: ""
	I1212 01:06:20.780372  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.780380  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:20.780390  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:20.780407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:20.837660  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:20.837699  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:20.852743  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:20.852775  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:20.928353  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:20.928385  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:20.928401  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:21.009919  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:21.009961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:23.553897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:23.568667  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:23.568742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:23.607841  142150 cri.go:89] found id: ""
	I1212 01:06:23.607873  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.607884  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:23.607891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:23.607945  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:23.645461  142150 cri.go:89] found id: ""
	I1212 01:06:23.645494  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.645505  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:23.645513  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:23.645578  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:23.681140  142150 cri.go:89] found id: ""
	I1212 01:06:23.681165  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.681174  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:23.681180  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:23.681230  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:23.718480  142150 cri.go:89] found id: ""
	I1212 01:06:23.718515  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.718526  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:23.718534  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:23.718602  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:23.760206  142150 cri.go:89] found id: ""
	I1212 01:06:23.760235  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.760243  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:23.760249  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:23.760302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:23.797384  142150 cri.go:89] found id: ""
	I1212 01:06:23.797417  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.797431  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:23.797439  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:23.797496  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:23.830608  142150 cri.go:89] found id: ""
	I1212 01:06:23.830639  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.830650  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:23.830658  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:23.830722  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:23.867481  142150 cri.go:89] found id: ""
	I1212 01:06:23.867509  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.867522  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:23.867534  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:23.867551  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:23.922529  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:23.922579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:23.936763  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:23.936794  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:24.004371  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:24.004398  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:24.004413  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:24.083097  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:24.083136  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:21.842975  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.845498  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.343574  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.596224  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.094625  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.707610  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:29.208425  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.633394  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:26.646898  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:26.646977  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:26.680382  142150 cri.go:89] found id: ""
	I1212 01:06:26.680411  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.680421  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:26.680427  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:26.680475  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:26.716948  142150 cri.go:89] found id: ""
	I1212 01:06:26.716982  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.716994  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:26.717001  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:26.717090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:26.753141  142150 cri.go:89] found id: ""
	I1212 01:06:26.753168  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.753176  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:26.753182  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:26.753231  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:26.791025  142150 cri.go:89] found id: ""
	I1212 01:06:26.791056  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.791068  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:26.791074  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:26.791130  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:26.829914  142150 cri.go:89] found id: ""
	I1212 01:06:26.829952  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.829965  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:26.829973  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:26.830046  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:26.865990  142150 cri.go:89] found id: ""
	I1212 01:06:26.866022  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.866045  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:26.866053  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:26.866133  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:26.906007  142150 cri.go:89] found id: ""
	I1212 01:06:26.906040  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.906052  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:26.906060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:26.906141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:26.946004  142150 cri.go:89] found id: ""
	I1212 01:06:26.946038  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.946048  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:26.946057  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:26.946073  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:27.018967  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:27.018996  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:27.019013  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:27.100294  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:27.100334  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:27.141147  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:27.141190  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:27.193161  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:27.193200  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:29.709616  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:29.723336  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:29.723413  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:29.769938  142150 cri.go:89] found id: ""
	I1212 01:06:29.769966  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.769977  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:29.769985  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:29.770048  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:29.809109  142150 cri.go:89] found id: ""
	I1212 01:06:29.809147  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.809160  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:29.809168  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:29.809229  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:29.845444  142150 cri.go:89] found id: ""
	I1212 01:06:29.845471  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.845481  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:29.845488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:29.845548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:29.882109  142150 cri.go:89] found id: ""
	I1212 01:06:29.882138  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.882147  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:29.882153  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:29.882203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:29.928731  142150 cri.go:89] found id: ""
	I1212 01:06:29.928764  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.928777  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:29.928785  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:29.928849  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:29.972994  142150 cri.go:89] found id: ""
	I1212 01:06:29.973026  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.973041  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:29.973048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:29.973098  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:30.009316  142150 cri.go:89] found id: ""
	I1212 01:06:30.009349  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.009357  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:30.009363  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:30.009422  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:30.043082  142150 cri.go:89] found id: ""
	I1212 01:06:30.043111  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.043122  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:30.043134  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:30.043149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:30.097831  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:30.097866  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:30.112873  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:30.112906  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:30.187035  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:30.187061  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:30.187081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:28.843986  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.343502  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:28.096043  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.594875  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.707976  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:34.208061  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.273106  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:30.273155  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:32.819179  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:32.833486  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:32.833555  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:32.872579  142150 cri.go:89] found id: ""
	I1212 01:06:32.872622  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.872631  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:32.872645  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:32.872700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:32.909925  142150 cri.go:89] found id: ""
	I1212 01:06:32.909958  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.909970  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:32.909979  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:32.910053  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:32.949085  142150 cri.go:89] found id: ""
	I1212 01:06:32.949116  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.949127  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:32.949135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:32.949197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:32.985755  142150 cri.go:89] found id: ""
	I1212 01:06:32.985782  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.985790  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:32.985796  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:32.985845  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:33.028340  142150 cri.go:89] found id: ""
	I1212 01:06:33.028367  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.028374  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:33.028380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:33.028432  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:33.064254  142150 cri.go:89] found id: ""
	I1212 01:06:33.064283  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.064292  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:33.064298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:33.064349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:33.099905  142150 cri.go:89] found id: ""
	I1212 01:06:33.099936  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.099943  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:33.099949  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:33.100008  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:33.137958  142150 cri.go:89] found id: ""
	I1212 01:06:33.137993  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.138004  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:33.138016  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:33.138034  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:33.190737  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:33.190776  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:33.205466  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:33.205502  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:33.278815  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:33.278844  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:33.278863  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:33.357387  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:33.357429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:33.843106  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.344148  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:33.095175  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.095369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:37.095797  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.707296  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.207875  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.898317  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:35.913832  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:35.913907  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:35.950320  142150 cri.go:89] found id: ""
	I1212 01:06:35.950345  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.950353  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:35.950359  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:35.950407  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:35.989367  142150 cri.go:89] found id: ""
	I1212 01:06:35.989394  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.989403  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:35.989409  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:35.989457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:36.024118  142150 cri.go:89] found id: ""
	I1212 01:06:36.024148  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.024155  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:36.024163  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:36.024221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:36.059937  142150 cri.go:89] found id: ""
	I1212 01:06:36.059966  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.059974  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:36.059980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:36.060030  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:36.096897  142150 cri.go:89] found id: ""
	I1212 01:06:36.096921  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.096933  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:36.096941  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:36.096994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:36.134387  142150 cri.go:89] found id: ""
	I1212 01:06:36.134412  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.134420  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:36.134426  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:36.134490  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:36.177414  142150 cri.go:89] found id: ""
	I1212 01:06:36.177452  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.177464  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:36.177471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:36.177533  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:36.221519  142150 cri.go:89] found id: ""
	I1212 01:06:36.221553  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.221563  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:36.221575  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:36.221590  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:36.234862  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:36.234891  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:36.314361  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:36.314391  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:36.314407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:36.398283  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:36.398328  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:36.441441  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:36.441481  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:38.995369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:39.009149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:39.009221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:39.044164  142150 cri.go:89] found id: ""
	I1212 01:06:39.044194  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.044204  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:39.044210  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:39.044259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:39.080145  142150 cri.go:89] found id: ""
	I1212 01:06:39.080180  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.080191  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:39.080197  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:39.080254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:39.119128  142150 cri.go:89] found id: ""
	I1212 01:06:39.119156  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.119167  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:39.119174  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:39.119240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:39.157444  142150 cri.go:89] found id: ""
	I1212 01:06:39.157476  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.157487  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:39.157495  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:39.157562  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:39.191461  142150 cri.go:89] found id: ""
	I1212 01:06:39.191486  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.191497  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:39.191505  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:39.191573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:39.227742  142150 cri.go:89] found id: ""
	I1212 01:06:39.227769  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.227777  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:39.227783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:39.227832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:39.268207  142150 cri.go:89] found id: ""
	I1212 01:06:39.268239  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.268251  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:39.268259  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:39.268319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:39.304054  142150 cri.go:89] found id: ""
	I1212 01:06:39.304092  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.304103  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:39.304115  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:39.304128  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:39.381937  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:39.381979  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:39.421824  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:39.421864  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:39.475968  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:39.476020  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:39.491398  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:39.491429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:39.568463  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:38.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.343589  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.594883  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.594919  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.707035  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.707860  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:42.068594  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:42.082041  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:42.082123  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:42.121535  142150 cri.go:89] found id: ""
	I1212 01:06:42.121562  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.121570  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:42.121577  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:42.121627  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:42.156309  142150 cri.go:89] found id: ""
	I1212 01:06:42.156341  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.156350  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:42.156364  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:42.156427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:42.190111  142150 cri.go:89] found id: ""
	I1212 01:06:42.190137  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.190145  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:42.190151  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:42.190209  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:42.225424  142150 cri.go:89] found id: ""
	I1212 01:06:42.225452  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.225461  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:42.225468  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:42.225526  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:42.260519  142150 cri.go:89] found id: ""
	I1212 01:06:42.260552  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.260564  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:42.260576  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:42.260644  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:42.296987  142150 cri.go:89] found id: ""
	I1212 01:06:42.297017  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.297028  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:42.297036  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:42.297109  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:42.331368  142150 cri.go:89] found id: ""
	I1212 01:06:42.331400  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.331409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:42.331415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:42.331482  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:42.367010  142150 cri.go:89] found id: ""
	I1212 01:06:42.367051  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.367062  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:42.367075  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:42.367093  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:42.381264  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:42.381299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:42.452831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:42.452856  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:42.452877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:42.531965  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:42.532006  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:42.571718  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:42.571757  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.128570  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:45.142897  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:45.142969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:45.186371  142150 cri.go:89] found id: ""
	I1212 01:06:45.186404  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.186412  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:45.186418  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:45.186468  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:45.224085  142150 cri.go:89] found id: ""
	I1212 01:06:45.224115  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.224123  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:45.224129  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:45.224195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:43.346470  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.845269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.595640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.596624  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.708204  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:48.206947  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.258477  142150 cri.go:89] found id: ""
	I1212 01:06:45.258510  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.258522  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:45.258530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:45.258590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:45.293091  142150 cri.go:89] found id: ""
	I1212 01:06:45.293125  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.293137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:45.293145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:45.293211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:45.331275  142150 cri.go:89] found id: ""
	I1212 01:06:45.331314  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.331325  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:45.331332  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:45.331400  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:45.374915  142150 cri.go:89] found id: ""
	I1212 01:06:45.374943  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.374956  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:45.374965  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:45.375027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:45.415450  142150 cri.go:89] found id: ""
	I1212 01:06:45.415480  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.415489  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:45.415496  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:45.415548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:45.454407  142150 cri.go:89] found id: ""
	I1212 01:06:45.454431  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.454439  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:45.454449  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:45.454460  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.508573  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:45.508612  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:45.524049  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:45.524085  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:45.593577  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:45.593602  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:45.593618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:45.678581  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:45.678620  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.221523  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:48.235146  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:48.235212  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:48.271845  142150 cri.go:89] found id: ""
	I1212 01:06:48.271875  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.271885  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:48.271891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:48.271944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:48.308558  142150 cri.go:89] found id: ""
	I1212 01:06:48.308589  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.308602  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:48.308610  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:48.308673  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:48.346395  142150 cri.go:89] found id: ""
	I1212 01:06:48.346423  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.346434  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:48.346440  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:48.346501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:48.381505  142150 cri.go:89] found id: ""
	I1212 01:06:48.381536  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.381548  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:48.381555  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:48.381617  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:48.417829  142150 cri.go:89] found id: ""
	I1212 01:06:48.417859  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.417871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:48.417878  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:48.417944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:48.453476  142150 cri.go:89] found id: ""
	I1212 01:06:48.453508  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.453519  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:48.453528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:48.453592  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:48.490500  142150 cri.go:89] found id: ""
	I1212 01:06:48.490531  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.490541  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:48.490547  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:48.490597  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:48.527492  142150 cri.go:89] found id: ""
	I1212 01:06:48.527520  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.527529  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:48.527539  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:48.527550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.570458  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:48.570499  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:48.623986  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:48.624031  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:48.638363  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:48.638392  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:48.709373  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:48.709400  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:48.709416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:48.344831  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.345010  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:47.596708  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.094517  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:52.094931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.706903  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:53.207824  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:51.291629  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:51.305060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:51.305140  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:51.340368  142150 cri.go:89] found id: ""
	I1212 01:06:51.340394  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.340404  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:51.340411  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:51.340489  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:51.381421  142150 cri.go:89] found id: ""
	I1212 01:06:51.381453  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.381466  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:51.381474  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:51.381536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:51.421482  142150 cri.go:89] found id: ""
	I1212 01:06:51.421518  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.421530  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:51.421538  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:51.421605  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:51.457190  142150 cri.go:89] found id: ""
	I1212 01:06:51.457217  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.457227  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:51.457236  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:51.457302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:51.496149  142150 cri.go:89] found id: ""
	I1212 01:06:51.496184  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.496196  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:51.496205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:51.496270  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:51.529779  142150 cri.go:89] found id: ""
	I1212 01:06:51.529809  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.529820  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:51.529826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:51.529893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:51.568066  142150 cri.go:89] found id: ""
	I1212 01:06:51.568105  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.568118  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:51.568126  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:51.568197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:51.605556  142150 cri.go:89] found id: ""
	I1212 01:06:51.605593  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.605605  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:51.605616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:51.605632  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:51.680531  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:51.680570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:51.727663  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:51.727697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:51.780013  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:51.780053  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:51.794203  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:51.794232  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:51.869407  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.369854  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:54.383539  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:54.383625  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:54.418536  142150 cri.go:89] found id: ""
	I1212 01:06:54.418574  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.418586  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:54.418594  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:54.418657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:54.454485  142150 cri.go:89] found id: ""
	I1212 01:06:54.454515  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.454523  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:54.454531  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:54.454581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:54.494254  142150 cri.go:89] found id: ""
	I1212 01:06:54.494284  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.494296  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:54.494304  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:54.494366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:54.532727  142150 cri.go:89] found id: ""
	I1212 01:06:54.532757  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.532768  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:54.532776  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:54.532862  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:54.569817  142150 cri.go:89] found id: ""
	I1212 01:06:54.569845  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.569856  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:54.569864  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:54.569927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:54.628530  142150 cri.go:89] found id: ""
	I1212 01:06:54.628564  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.628577  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:54.628585  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:54.628635  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:54.666761  142150 cri.go:89] found id: ""
	I1212 01:06:54.666792  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.666801  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:54.666808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:54.666879  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:54.703699  142150 cri.go:89] found id: ""
	I1212 01:06:54.703726  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.703737  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:54.703749  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:54.703764  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:54.754635  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:54.754672  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:54.769112  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:54.769143  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:54.845563  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.845580  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:54.845591  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:54.922651  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:54.922690  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
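
The loop above is minikube's diagnostic sweep: for each expected control-plane component it asks the CRI for matching containers and finds none, then falls back to gathering kubelet, dmesg, CRI-O, and node logs. A minimal shell sketch of the same probe, assuming crictl is installed and configured for the node's CRI-O socket (the echo text is illustrative, not minikube's own output):

    # Probe the CRI for each control-plane component, mirroring the crictl calls logged above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
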
	I1212 01:06:52.843114  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.845370  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.095381  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:56.097745  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:55.207916  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.708907  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.467454  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:57.480673  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:57.480769  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:57.517711  142150 cri.go:89] found id: ""
	I1212 01:06:57.517737  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.517745  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:57.517751  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:57.517813  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:57.552922  142150 cri.go:89] found id: ""
	I1212 01:06:57.552948  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.552956  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:57.552977  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:57.553061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:57.589801  142150 cri.go:89] found id: ""
	I1212 01:06:57.589827  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.589839  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:57.589845  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:57.589909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:57.626088  142150 cri.go:89] found id: ""
	I1212 01:06:57.626123  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.626135  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:57.626142  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:57.626211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:57.661228  142150 cri.go:89] found id: ""
	I1212 01:06:57.661261  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.661273  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:57.661281  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:57.661344  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:57.699523  142150 cri.go:89] found id: ""
	I1212 01:06:57.699551  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.699559  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:57.699565  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:57.699641  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:57.739000  142150 cri.go:89] found id: ""
	I1212 01:06:57.739032  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.739043  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:57.739051  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:57.739128  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:57.776691  142150 cri.go:89] found id: ""
	I1212 01:06:57.776723  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.776732  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:57.776743  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:57.776767  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:57.828495  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:57.828535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:57.843935  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:57.843970  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:57.916420  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:57.916446  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:57.916463  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:57.994107  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:57.994158  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:57.344917  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:59.844269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:58.595415  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:01.095794  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.208708  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:02.707173  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.540646  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:00.554032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:00.554141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:00.590815  142150 cri.go:89] found id: ""
	I1212 01:07:00.590843  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.590852  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:00.590858  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:00.590919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:00.627460  142150 cri.go:89] found id: ""
	I1212 01:07:00.627494  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.627507  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:00.627515  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:00.627586  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:00.667429  142150 cri.go:89] found id: ""
	I1212 01:07:00.667472  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.667484  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:00.667494  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:00.667558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:00.713026  142150 cri.go:89] found id: ""
	I1212 01:07:00.713053  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.713060  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:00.713067  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:00.713129  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:00.748218  142150 cri.go:89] found id: ""
	I1212 01:07:00.748251  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.748264  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:00.748272  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:00.748325  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:00.786287  142150 cri.go:89] found id: ""
	I1212 01:07:00.786314  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.786322  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:00.786331  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:00.786389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:00.822957  142150 cri.go:89] found id: ""
	I1212 01:07:00.822986  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.822999  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:00.823007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:00.823081  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:00.862310  142150 cri.go:89] found id: ""
	I1212 01:07:00.862342  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.862354  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:00.862368  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:00.862385  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:00.930308  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:00.930343  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:00.930360  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:01.013889  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:01.013934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:01.064305  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:01.064342  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:01.133631  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:01.133678  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:03.648853  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:03.663287  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:03.663349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:03.700723  142150 cri.go:89] found id: ""
	I1212 01:07:03.700754  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.700766  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:03.700774  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:03.700840  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:03.741025  142150 cri.go:89] found id: ""
	I1212 01:07:03.741054  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.741065  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:03.741073  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:03.741147  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:03.782877  142150 cri.go:89] found id: ""
	I1212 01:07:03.782914  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.782927  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:03.782935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:03.782998  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:03.819227  142150 cri.go:89] found id: ""
	I1212 01:07:03.819272  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.819285  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:03.819292  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:03.819341  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:03.856660  142150 cri.go:89] found id: ""
	I1212 01:07:03.856687  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.856695  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:03.856701  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:03.856750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:03.893368  142150 cri.go:89] found id: ""
	I1212 01:07:03.893400  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.893410  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:03.893417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:03.893469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:03.929239  142150 cri.go:89] found id: ""
	I1212 01:07:03.929267  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.929275  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:03.929282  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:03.929335  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:03.963040  142150 cri.go:89] found id: ""
	I1212 01:07:03.963077  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.963089  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:03.963113  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:03.963129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:04.040119  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:04.040147  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:04.040161  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:04.122230  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:04.122269  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:04.163266  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:04.163298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:04.218235  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:04.218271  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:02.342899  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:04.343072  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:03.596239  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.094842  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:05.206813  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:07.209422  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.732405  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:06.748171  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:06.748278  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:06.792828  142150 cri.go:89] found id: ""
	I1212 01:07:06.792853  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.792861  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:06.792868  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:06.792929  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:06.851440  142150 cri.go:89] found id: ""
	I1212 01:07:06.851472  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.851483  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:06.851490  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:06.851556  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:06.894850  142150 cri.go:89] found id: ""
	I1212 01:07:06.894879  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.894887  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:06.894893  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:06.894944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:06.931153  142150 cri.go:89] found id: ""
	I1212 01:07:06.931188  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.931199  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:06.931206  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:06.931271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:06.966835  142150 cri.go:89] found id: ""
	I1212 01:07:06.966862  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.966871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:06.966877  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:06.966939  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:07.004810  142150 cri.go:89] found id: ""
	I1212 01:07:07.004839  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.004848  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:07.004854  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:07.004912  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:07.042641  142150 cri.go:89] found id: ""
	I1212 01:07:07.042679  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.042691  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:07.042699  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:07.042764  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:07.076632  142150 cri.go:89] found id: ""
	I1212 01:07:07.076659  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.076668  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:07.076678  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:07.076692  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:07.136796  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:07.136841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:07.153797  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:07.153831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:07.231995  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:07.232025  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:07.232042  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:07.319913  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:07.319950  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:09.862898  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:09.878554  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:09.878640  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:09.914747  142150 cri.go:89] found id: ""
	I1212 01:07:09.914782  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.914795  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:09.914803  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:09.914864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:09.949960  142150 cri.go:89] found id: ""
	I1212 01:07:09.949998  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.950019  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:09.950027  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:09.950084  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:09.989328  142150 cri.go:89] found id: ""
	I1212 01:07:09.989368  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.989380  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:09.989388  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:09.989454  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:10.024352  142150 cri.go:89] found id: ""
	I1212 01:07:10.024382  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.024390  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:10.024397  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:10.024446  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:10.058429  142150 cri.go:89] found id: ""
	I1212 01:07:10.058459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.058467  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:10.058473  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:10.058524  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:10.095183  142150 cri.go:89] found id: ""
	I1212 01:07:10.095219  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.095227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:10.095232  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:10.095284  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:10.129657  142150 cri.go:89] found id: ""
	I1212 01:07:10.129684  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.129695  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:10.129703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:10.129759  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:10.164433  142150 cri.go:89] found id: ""
	I1212 01:07:10.164459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.164470  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:10.164483  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:10.164500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:10.178655  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:10.178687  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 01:07:08.842564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.843885  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:08.095189  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.096580  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:09.707537  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.205862  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.207175  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	W1212 01:07:10.252370  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:10.252403  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:10.252421  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:10.329870  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:10.329914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:10.377778  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:10.377812  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:12.929471  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:12.944591  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:12.944651  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:12.980053  142150 cri.go:89] found id: ""
	I1212 01:07:12.980079  142150 logs.go:282] 0 containers: []
	W1212 01:07:12.980088  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:12.980097  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:12.980182  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:13.021710  142150 cri.go:89] found id: ""
	I1212 01:07:13.021743  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.021752  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:13.021758  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:13.021828  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:13.060426  142150 cri.go:89] found id: ""
	I1212 01:07:13.060458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.060469  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:13.060477  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:13.060545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:13.097435  142150 cri.go:89] found id: ""
	I1212 01:07:13.097458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.097466  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:13.097471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:13.097521  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:13.134279  142150 cri.go:89] found id: ""
	I1212 01:07:13.134314  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.134327  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:13.134335  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:13.134402  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:13.169942  142150 cri.go:89] found id: ""
	I1212 01:07:13.169971  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.169984  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:13.169992  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:13.170054  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:13.207495  142150 cri.go:89] found id: ""
	I1212 01:07:13.207526  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.207537  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:13.207550  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:13.207636  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:13.245214  142150 cri.go:89] found id: ""
	I1212 01:07:13.245240  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.245248  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:13.245258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:13.245272  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:13.301041  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:13.301081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:13.316068  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:13.316104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:13.391091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:13.391120  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:13.391138  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:13.472090  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:13.472130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:12.844629  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:15.344452  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.594761  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.595360  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:17.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.707535  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.208767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.013216  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.026636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:16.026715  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:16.062126  142150 cri.go:89] found id: ""
	I1212 01:07:16.062157  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.062169  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:16.062177  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:16.062240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:16.097538  142150 cri.go:89] found id: ""
	I1212 01:07:16.097562  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.097572  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:16.097581  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:16.097637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:16.133615  142150 cri.go:89] found id: ""
	I1212 01:07:16.133649  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.133661  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:16.133670  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:16.133732  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:16.169327  142150 cri.go:89] found id: ""
	I1212 01:07:16.169392  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.169414  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:16.169431  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:16.169538  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:16.214246  142150 cri.go:89] found id: ""
	I1212 01:07:16.214270  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.214278  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:16.214284  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:16.214342  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:16.251578  142150 cri.go:89] found id: ""
	I1212 01:07:16.251629  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.251641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:16.251649  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:16.251712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:16.298772  142150 cri.go:89] found id: ""
	I1212 01:07:16.298802  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.298811  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:16.298818  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:16.298891  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:16.336901  142150 cri.go:89] found id: ""
	I1212 01:07:16.336937  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.336946  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:16.336957  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:16.336969  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:16.389335  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:16.389376  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:16.403713  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:16.403743  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:16.485945  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:16.485972  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:16.485992  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:16.572137  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:16.572185  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.120296  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.133826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:19.133902  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:19.174343  142150 cri.go:89] found id: ""
	I1212 01:07:19.174381  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.174391  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:19.174397  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:19.174449  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:19.212403  142150 cri.go:89] found id: ""
	I1212 01:07:19.212425  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.212433  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:19.212439  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:19.212488  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:19.247990  142150 cri.go:89] found id: ""
	I1212 01:07:19.248018  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.248027  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:19.248033  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:19.248088  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:19.286733  142150 cri.go:89] found id: ""
	I1212 01:07:19.286763  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.286775  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:19.286783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:19.286848  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:19.325967  142150 cri.go:89] found id: ""
	I1212 01:07:19.325995  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.326006  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:19.326013  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:19.326073  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:19.361824  142150 cri.go:89] found id: ""
	I1212 01:07:19.361862  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.361874  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:19.361882  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:19.361951  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:19.399874  142150 cri.go:89] found id: ""
	I1212 01:07:19.399903  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.399915  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:19.399924  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:19.399978  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:19.444342  142150 cri.go:89] found id: ""
	I1212 01:07:19.444368  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.444376  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:19.444386  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:19.444398  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:19.524722  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:19.524766  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.564941  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:19.564984  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:19.620881  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:19.620915  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:19.635038  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:19.635078  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:19.707819  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
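
Every "describe nodes" attempt above fails the same way: kubectl cannot reach the apiserver on localhost:8443. A quick reachability check, assuming the default minikube apiserver port of 8443 (the curl probe is an illustrative addition, not part of minikube's log gathering):

    # Is an apiserver process running at all? Same pgrep pattern as the log above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    # Can anything connect to the secure port? Connection refused here explains the kubectl failures.
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on localhost:8443"
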
	I1212 01:07:17.851516  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:20.343210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.596696  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.095982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:21.706245  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:23.707282  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.208686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.222716  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:22.222774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:22.258211  142150 cri.go:89] found id: ""
	I1212 01:07:22.258237  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.258245  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:22.258251  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:22.258299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:22.294663  142150 cri.go:89] found id: ""
	I1212 01:07:22.294692  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.294701  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:22.294707  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:22.294771  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:22.331817  142150 cri.go:89] found id: ""
	I1212 01:07:22.331849  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.331861  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:22.331869  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:22.331927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:22.373138  142150 cri.go:89] found id: ""
	I1212 01:07:22.373168  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.373176  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:22.373185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:22.373238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:22.409864  142150 cri.go:89] found id: ""
	I1212 01:07:22.409903  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.409916  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:22.409927  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:22.409983  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:22.447498  142150 cri.go:89] found id: ""
	I1212 01:07:22.447531  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.447542  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:22.447549  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:22.447626  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:22.488674  142150 cri.go:89] found id: ""
	I1212 01:07:22.488715  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.488727  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:22.488735  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:22.488803  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:22.529769  142150 cri.go:89] found id: ""
	I1212 01:07:22.529797  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.529806  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:22.529817  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:22.529837  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:22.611864  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:22.611889  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:22.611904  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:22.694660  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:22.694707  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:22.736800  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:22.736838  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:22.789670  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:22.789710  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:22.344482  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.844735  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.594999  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:26.595500  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:25.707950  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.200781  141469 pod_ready.go:82] duration metric: took 4m0.000776844s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:28.200837  141469 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:28.200866  141469 pod_ready.go:39] duration metric: took 4m15.556500045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:28.200916  141469 kubeadm.go:597] duration metric: took 4m22.571399912s to restartPrimaryControlPlane
	W1212 01:07:28.201043  141469 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:28.201086  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:25.305223  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.318986  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:25.319057  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:25.356111  142150 cri.go:89] found id: ""
	I1212 01:07:25.356140  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.356150  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:25.356157  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:25.356223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:25.396120  142150 cri.go:89] found id: ""
	I1212 01:07:25.396151  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.396163  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:25.396171  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:25.396236  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:25.436647  142150 cri.go:89] found id: ""
	I1212 01:07:25.436674  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.436681  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:25.436687  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:25.436744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:25.475682  142150 cri.go:89] found id: ""
	I1212 01:07:25.475709  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.475721  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:25.475729  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:25.475791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:25.512536  142150 cri.go:89] found id: ""
	I1212 01:07:25.512564  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.512576  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:25.512584  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:25.512655  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:25.549569  142150 cri.go:89] found id: ""
	I1212 01:07:25.549600  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.549609  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:25.549616  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:25.549681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:25.585042  142150 cri.go:89] found id: ""
	I1212 01:07:25.585074  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.585089  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:25.585106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:25.585181  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:25.626257  142150 cri.go:89] found id: ""
	I1212 01:07:25.626283  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.626291  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:25.626301  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:25.626314  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:25.679732  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:25.679773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:25.693682  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:25.693711  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:25.770576  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:25.770599  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:25.770613  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:25.848631  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:25.848667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.388387  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.404838  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:28.404925  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:28.447452  142150 cri.go:89] found id: ""
	I1212 01:07:28.447486  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.447498  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:28.447506  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:28.447581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:28.487285  142150 cri.go:89] found id: ""
	I1212 01:07:28.487312  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.487321  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:28.487326  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:28.487389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:28.520403  142150 cri.go:89] found id: ""
	I1212 01:07:28.520433  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.520442  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:28.520448  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:28.520514  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:28.556671  142150 cri.go:89] found id: ""
	I1212 01:07:28.556703  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.556712  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:28.556720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:28.556787  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:28.597136  142150 cri.go:89] found id: ""
	I1212 01:07:28.597165  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.597176  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:28.597185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:28.597258  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:28.632603  142150 cri.go:89] found id: ""
	I1212 01:07:28.632633  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.632641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:28.632648  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:28.632710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:28.672475  142150 cri.go:89] found id: ""
	I1212 01:07:28.672512  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.672523  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:28.672530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:28.672581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:28.715053  142150 cri.go:89] found id: ""
	I1212 01:07:28.715093  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.715104  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:28.715114  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:28.715129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.752978  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:28.753017  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:28.807437  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:28.807479  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:28.822196  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:28.822223  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:28.902592  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:28.902616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:28.902630  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:27.343233  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:29.344194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.596410  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.096062  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.486972  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.500676  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:31.500755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:31.536877  142150 cri.go:89] found id: ""
	I1212 01:07:31.536911  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.536922  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:31.536931  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:31.537000  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:31.572637  142150 cri.go:89] found id: ""
	I1212 01:07:31.572670  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.572684  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:31.572692  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:31.572761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:31.610050  142150 cri.go:89] found id: ""
	I1212 01:07:31.610084  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.610097  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:31.610106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:31.610159  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:31.645872  142150 cri.go:89] found id: ""
	I1212 01:07:31.645905  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.645918  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:31.645926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:31.645988  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:31.682374  142150 cri.go:89] found id: ""
	I1212 01:07:31.682401  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.682409  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:31.682415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:31.682464  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:31.724755  142150 cri.go:89] found id: ""
	I1212 01:07:31.724788  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.724801  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:31.724809  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:31.724877  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:31.760700  142150 cri.go:89] found id: ""
	I1212 01:07:31.760732  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.760741  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:31.760747  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:31.760823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:31.794503  142150 cri.go:89] found id: ""
	I1212 01:07:31.794538  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.794549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:31.794562  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:31.794577  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:31.837103  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:31.837139  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:31.889104  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:31.889142  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:31.905849  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:31.905883  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:31.983351  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:31.983372  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:31.983388  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:34.564505  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.577808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:34.577884  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:34.616950  142150 cri.go:89] found id: ""
	I1212 01:07:34.616979  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.616992  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:34.617001  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:34.617071  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:34.653440  142150 cri.go:89] found id: ""
	I1212 01:07:34.653470  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.653478  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:34.653485  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:34.653535  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:34.693426  142150 cri.go:89] found id: ""
	I1212 01:07:34.693457  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.693465  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:34.693471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:34.693520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:34.727113  142150 cri.go:89] found id: ""
	I1212 01:07:34.727154  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.727166  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:34.727175  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:34.727237  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:34.766942  142150 cri.go:89] found id: ""
	I1212 01:07:34.766967  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.766974  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:34.766981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:34.767032  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:34.806189  142150 cri.go:89] found id: ""
	I1212 01:07:34.806214  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.806223  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:34.806229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:34.806293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:34.839377  142150 cri.go:89] found id: ""
	I1212 01:07:34.839408  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.839420  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:34.839429  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:34.839486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:34.877512  142150 cri.go:89] found id: ""
	I1212 01:07:34.877541  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.877549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:34.877558  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:34.877570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:34.914966  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:34.914994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:34.964993  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:34.965033  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:34.979644  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:34.979677  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:35.050842  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:35.050868  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:35.050893  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:31.843547  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.843911  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:36.343719  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.595369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:35.600094  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:37.634362  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.647476  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:37.647542  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:37.681730  142150 cri.go:89] found id: ""
	I1212 01:07:37.681760  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.681768  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:37.681775  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:37.681827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:37.716818  142150 cri.go:89] found id: ""
	I1212 01:07:37.716845  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.716858  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:37.716864  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:37.716913  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:37.753005  142150 cri.go:89] found id: ""
	I1212 01:07:37.753034  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.753042  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:37.753048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:37.753104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:37.789850  142150 cri.go:89] found id: ""
	I1212 01:07:37.789888  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.789900  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:37.789909  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:37.789971  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:37.826418  142150 cri.go:89] found id: ""
	I1212 01:07:37.826455  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.826466  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:37.826475  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:37.826539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:37.862108  142150 cri.go:89] found id: ""
	I1212 01:07:37.862134  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.862143  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:37.862149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:37.862202  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:37.897622  142150 cri.go:89] found id: ""
	I1212 01:07:37.897660  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.897673  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:37.897681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:37.897743  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:37.935027  142150 cri.go:89] found id: ""
	I1212 01:07:37.935055  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.935063  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:37.935072  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:37.935088  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:37.949860  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:37.949890  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:38.019692  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:38.019721  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:38.019740  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:38.100964  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:38.100994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:38.144480  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:38.144514  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:38.844539  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.844997  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:38.096180  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.699192  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.712311  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:40.712398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:40.748454  142150 cri.go:89] found id: ""
	I1212 01:07:40.748482  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.748490  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:40.748496  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:40.748545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:40.785262  142150 cri.go:89] found id: ""
	I1212 01:07:40.785292  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.785305  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:40.785312  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:40.785376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:40.821587  142150 cri.go:89] found id: ""
	I1212 01:07:40.821624  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.821636  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:40.821644  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:40.821713  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:40.882891  142150 cri.go:89] found id: ""
	I1212 01:07:40.882918  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.882926  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:40.882935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:40.882987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:40.923372  142150 cri.go:89] found id: ""
	I1212 01:07:40.923403  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.923412  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:40.923419  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:40.923485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:40.962753  142150 cri.go:89] found id: ""
	I1212 01:07:40.962781  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.962789  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:40.962795  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:40.962851  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:40.996697  142150 cri.go:89] found id: ""
	I1212 01:07:40.996731  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.996744  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:40.996751  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:40.996812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:41.031805  142150 cri.go:89] found id: ""
	I1212 01:07:41.031842  142150 logs.go:282] 0 containers: []
	W1212 01:07:41.031855  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:41.031866  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:41.031884  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:41.108288  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:41.108310  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:41.108333  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:41.190075  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:41.190115  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:41.235886  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:41.235927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:41.288515  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:41.288554  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:43.803694  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.817859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:43.817919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:43.864193  142150 cri.go:89] found id: ""
	I1212 01:07:43.864221  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.864228  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:43.864234  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:43.864288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:43.902324  142150 cri.go:89] found id: ""
	I1212 01:07:43.902359  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.902371  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:43.902379  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:43.902443  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:43.940847  142150 cri.go:89] found id: ""
	I1212 01:07:43.940880  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.940890  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:43.940896  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:43.940947  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:43.979270  142150 cri.go:89] found id: ""
	I1212 01:07:43.979302  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.979314  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:43.979322  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:43.979398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:44.024819  142150 cri.go:89] found id: ""
	I1212 01:07:44.024851  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.024863  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:44.024872  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:44.024941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:44.062199  142150 cri.go:89] found id: ""
	I1212 01:07:44.062225  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.062234  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:44.062242  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:44.062306  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:44.097158  142150 cri.go:89] found id: ""
	I1212 01:07:44.097181  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.097188  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:44.097194  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:44.097240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:44.132067  142150 cri.go:89] found id: ""
	I1212 01:07:44.132105  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.132120  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:44.132132  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:44.132148  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:44.179552  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:44.179589  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:44.238243  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:44.238299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:44.255451  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:44.255493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:44.331758  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:44.331784  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:44.331797  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:43.343026  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.343118  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:42.595856  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.096338  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:46.916033  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.929686  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:46.929761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:46.966328  142150 cri.go:89] found id: ""
	I1212 01:07:46.966357  142150 logs.go:282] 0 containers: []
	W1212 01:07:46.966365  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:46.966371  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:46.966423  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:47.002014  142150 cri.go:89] found id: ""
	I1212 01:07:47.002059  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.002074  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:47.002082  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:47.002148  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:47.038127  142150 cri.go:89] found id: ""
	I1212 01:07:47.038158  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.038166  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:47.038172  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:47.038222  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:47.071654  142150 cri.go:89] found id: ""
	I1212 01:07:47.071684  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.071696  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:47.071704  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:47.071774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:47.105489  142150 cri.go:89] found id: ""
	I1212 01:07:47.105515  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.105524  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:47.105530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:47.105577  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.143005  142150 cri.go:89] found id: ""
	I1212 01:07:47.143042  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.143051  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:47.143058  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:47.143114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:47.176715  142150 cri.go:89] found id: ""
	I1212 01:07:47.176746  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.176756  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:47.176764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:47.176827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:47.211770  142150 cri.go:89] found id: ""
	I1212 01:07:47.211806  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.211817  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:47.211831  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:47.211850  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:47.312766  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:47.312795  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:47.312811  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:47.402444  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:47.402493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:47.441071  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:47.441109  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:47.494465  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:47.494507  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.009996  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.023764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:50.023832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:50.060392  142150 cri.go:89] found id: ""
	I1212 01:07:50.060424  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.060433  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:50.060440  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:50.060497  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:50.094874  142150 cri.go:89] found id: ""
	I1212 01:07:50.094904  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.094914  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:50.094923  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:50.094987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:50.128957  142150 cri.go:89] found id: ""
	I1212 01:07:50.128986  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.128996  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:50.129005  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:50.129067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:50.164794  142150 cri.go:89] found id: ""
	I1212 01:07:50.164819  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.164828  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:50.164835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:50.164890  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:50.201295  142150 cri.go:89] found id: ""
	I1212 01:07:50.201330  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.201342  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:50.201350  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:50.201415  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.343485  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:48.337317  141884 pod_ready.go:82] duration metric: took 4m0.000178627s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:48.337358  141884 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:48.337386  141884 pod_ready.go:39] duration metric: took 4m14.601527023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:48.337421  141884 kubeadm.go:597] duration metric: took 4m22.883520304s to restartPrimaryControlPlane
	W1212 01:07:48.337486  141884 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:48.337526  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:47.595092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:50.096774  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.514069  141469 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312952103s)
	I1212 01:07:54.514153  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:54.543613  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:54.555514  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:54.569001  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:54.569024  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:54.569082  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:54.583472  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:54.583553  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:54.598721  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:54.614369  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:54.614451  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:54.625630  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.643317  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:54.643398  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.652870  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:54.662703  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:54.662774  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:07:54.672601  141469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:54.722949  141469 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:07:54.723064  141469 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:54.845332  141469 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:54.845476  141469 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:54.845623  141469 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:54.855468  141469 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:50.236158  142150 cri.go:89] found id: ""
	I1212 01:07:50.236200  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.236212  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:50.236221  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:50.236271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:50.270232  142150 cri.go:89] found id: ""
	I1212 01:07:50.270268  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.270280  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:50.270288  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:50.270356  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:50.303222  142150 cri.go:89] found id: ""
	I1212 01:07:50.303247  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.303258  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:50.303270  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:50.303288  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.316845  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:50.316874  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:50.384455  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:50.384483  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:50.384500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:50.462863  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:50.462921  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:50.503464  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:50.503495  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:53.063953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.079946  142150 kubeadm.go:597] duration metric: took 4m3.966538012s to restartPrimaryControlPlane
	W1212 01:07:53.080031  142150 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:53.080064  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:54.857558  141469 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:54.857689  141469 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:54.857774  141469 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:54.857890  141469 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:54.857960  141469 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:54.858038  141469 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:54.858109  141469 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:54.858214  141469 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:54.858296  141469 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:54.858396  141469 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:54.858503  141469 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:54.858557  141469 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:54.858643  141469 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:55.129859  141469 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:55.274235  141469 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:07:55.401999  141469 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:56.015091  141469 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:56.123268  141469 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:56.123820  141469 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:56.126469  141469 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:52.595027  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:57.096606  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:58.255454  142150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.175361092s)
	I1212 01:07:58.255545  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:58.270555  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:58.281367  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:58.291555  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:58.291580  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:58.291652  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:58.301408  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:58.301473  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:58.314324  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:58.326559  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:58.326628  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:58.338454  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.348752  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:58.348815  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.361968  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:58.374545  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:58.374614  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:07:58.387280  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:58.474893  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:07:58.475043  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:58.647222  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:58.647400  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:58.647566  142150 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:58.839198  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:56.128185  141469 out.go:235]   - Booting up control plane ...
	I1212 01:07:56.128343  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:56.128478  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:56.128577  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:56.149476  141469 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:56.156042  141469 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:56.156129  141469 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:56.292423  141469 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:07:56.292567  141469 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:07:56.794594  141469 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.027526ms
	I1212 01:07:56.794711  141469 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:07:58.841061  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:58.841173  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:58.841297  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:58.841411  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:58.841491  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:58.841575  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:58.841650  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:58.841771  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:58.842200  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:58.842503  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:58.842993  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:58.843207  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:58.843355  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:58.919303  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:59.206038  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:59.318620  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:59.693734  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:59.709562  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:59.710774  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:59.710846  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:59.877625  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:59.879576  142150 out.go:235]   - Booting up control plane ...
	I1212 01:07:59.879733  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:59.892655  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:59.894329  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:59.897694  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:59.898269  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:07:59.594764  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:01.595663  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:02.299386  141469 kubeadm.go:310] [api-check] The API server is healthy after 5.503154599s
	I1212 01:08:02.311549  141469 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:02.326944  141469 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:02.354402  141469 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:02.354661  141469 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-607268 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:02.368168  141469 kubeadm.go:310] [bootstrap-token] Using token: 0eo07f.wy46ulxfywwd0uy8
	I1212 01:08:02.369433  141469 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:02.369569  141469 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:02.381945  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:02.407880  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:02.419211  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:02.426470  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:02.437339  141469 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:02.708518  141469 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:03.143189  141469 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:03.704395  141469 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:03.705460  141469 kubeadm.go:310] 
	I1212 01:08:03.705557  141469 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:03.705576  141469 kubeadm.go:310] 
	I1212 01:08:03.705646  141469 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:03.705650  141469 kubeadm.go:310] 
	I1212 01:08:03.705672  141469 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:03.705724  141469 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:03.705768  141469 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:03.705800  141469 kubeadm.go:310] 
	I1212 01:08:03.705906  141469 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:03.705918  141469 kubeadm.go:310] 
	I1212 01:08:03.705976  141469 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:03.705987  141469 kubeadm.go:310] 
	I1212 01:08:03.706073  141469 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:03.706191  141469 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:03.706286  141469 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:03.706307  141469 kubeadm.go:310] 
	I1212 01:08:03.706438  141469 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:03.706549  141469 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:03.706556  141469 kubeadm.go:310] 
	I1212 01:08:03.706670  141469 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.706833  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:03.706864  141469 kubeadm.go:310] 	--control-plane 
	I1212 01:08:03.706869  141469 kubeadm.go:310] 
	I1212 01:08:03.706951  141469 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:03.706963  141469 kubeadm.go:310] 
	I1212 01:08:03.707035  141469 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.707134  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:03.708092  141469 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:03.708135  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:08:03.708146  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:03.709765  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:03.711315  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:03.724767  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
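The "Configuring bridge CNI" step above writes a single conflist into /etc/cni/net.d. The exact 496-byte file is not reproduced in the log; the sketch below writes a representative bridge + host-local configuration of the same shape (subnet, bridge name, and plugin options are assumptions for illustration, not the file minikube actually wrote in this run):

    package main

    import (
        "fmt"
        "os"
    )

    // A representative bridge CNI conflist of the kind minikube drops into
    // /etc/cni/net.d/1-k8s.conflist. Field values here are illustrative.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        // Writing to /etc/cni/net.d needs root, matching the sudo-driven step above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            fmt.Println("write conflist:", err)
        }
    }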
	I1212 01:08:03.749770  141469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:03.749830  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:03.749896  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-607268 minikube.k8s.io/updated_at=2024_12_12T01_08_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=embed-certs-607268 minikube.k8s.io/primary=true
	I1212 01:08:03.973050  141469 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:03.973436  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.094838  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:06.095216  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:04.473952  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.974222  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.473799  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.974261  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.473492  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.974288  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.474064  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.974218  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:08.081567  141469 kubeadm.go:1113] duration metric: took 4.331794716s to wait for elevateKubeSystemPrivileges
	I1212 01:08:08.081603  141469 kubeadm.go:394] duration metric: took 5m2.502707851s to StartCluster
	I1212 01:08:08.081629  141469 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.081722  141469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:08.083443  141469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.083783  141469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:08.083894  141469 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:08.084015  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:08.084027  141469 addons.go:69] Setting metrics-server=true in profile "embed-certs-607268"
	I1212 01:08:08.084045  141469 addons.go:234] Setting addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:08.084014  141469 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-607268"
	I1212 01:08:08.084054  141469 addons.go:69] Setting default-storageclass=true in profile "embed-certs-607268"
	I1212 01:08:08.084083  141469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-607268"
	I1212 01:08:08.084085  141469 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-607268"
	W1212 01:08:08.084130  141469 addons.go:243] addon storage-provisioner should already be in state true
	W1212 01:08:08.084057  141469 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084618  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084658  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084671  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084684  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084617  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084756  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.085205  141469 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:08.086529  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:08.104090  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I1212 01:08:08.104115  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I1212 01:08:08.104092  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1212 01:08:08.104662  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104701  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104785  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105323  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105329  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105337  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105382  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105696  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105718  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105700  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.106132  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106163  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.106364  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.106599  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106626  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.110390  141469 addons.go:234] Setting addon default-storageclass=true in "embed-certs-607268"
	W1212 01:08:08.110415  141469 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:08.110447  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.110811  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.110844  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.124380  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I1212 01:08:08.124888  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.125447  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.125472  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.125764  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.125966  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.126885  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1212 01:08:08.127417  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.127718  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I1212 01:08:08.127911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.127990  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128002  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.128161  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.128338  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.128541  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.128612  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128626  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.129037  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.129640  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.129678  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.129905  141469 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:08.131337  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:08.131367  141469 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:08.131387  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.131816  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.133335  141469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:08.134372  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.134696  141469 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.134714  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:08.134734  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.134851  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.134868  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.135026  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.135247  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.135405  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.135549  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.137253  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137705  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.137725  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137810  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.137911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.138065  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.138162  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.146888  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I1212 01:08:08.147344  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.147919  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.147937  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.148241  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.148418  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.150018  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.150282  141469 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.150299  141469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:08.150318  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.152881  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153311  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.153327  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.153344  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153509  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.153634  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.153816  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.301991  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:08.323794  141469 node_ready.go:35] waiting up to 6m0s for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338205  141469 node_ready.go:49] node "embed-certs-607268" has status "Ready":"True"
	I1212 01:08:08.338241  141469 node_ready.go:38] duration metric: took 14.401624ms for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338255  141469 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:08.355801  141469 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:08.406624  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:08.406648  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:08.409497  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.456893  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:08.456917  141469 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:08.554996  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.558767  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.558793  141469 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:08.614574  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.702483  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702513  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.702818  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.702883  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.702894  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.702904  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702912  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.703142  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.703186  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.703163  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.714426  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.714450  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.714840  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.714857  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.821732  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266688284s)
	I1212 01:08:09.821807  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.821824  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822160  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822185  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.822211  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.822225  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822487  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.822518  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822535  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842157  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.227536232s)
	I1212 01:08:09.842222  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842237  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.842627  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.842663  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.842672  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842679  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842687  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.843002  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.843013  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.843028  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.843046  141469 addons.go:475] Verifying addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:09.844532  141469 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:08.098516  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:10.596197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:09.845721  141469 addons.go:510] duration metric: took 1.761839241s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:10.400164  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:12.862616  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:14.362448  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.362473  141469 pod_ready.go:82] duration metric: took 6.006632075s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.362486  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868198  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.868220  141469 pod_ready.go:82] duration metric: took 505.72656ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868231  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872557  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.872582  141469 pod_ready.go:82] duration metric: took 4.343797ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872599  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876837  141469 pod_ready.go:93] pod "kube-proxy-6hw4b" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.876858  141469 pod_ready.go:82] duration metric: took 4.251529ms for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876867  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881467  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.881487  141469 pod_ready.go:82] duration metric: took 4.612567ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881496  141469 pod_ready.go:39] duration metric: took 6.543228562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:14.881516  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:14.881571  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:14.898899  141469 api_server.go:72] duration metric: took 6.815070313s to wait for apiserver process to appear ...
	I1212 01:08:14.898942  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:14.898963  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:08:14.904555  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:08:14.905738  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:14.905762  141469 api_server.go:131] duration metric: took 6.812513ms to wait for apiserver health ...
	I1212 01:08:14.905771  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:14.964381  141469 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:14.964413  141469 system_pods.go:61] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:14.964418  141469 system_pods.go:61] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:14.964422  141469 system_pods.go:61] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:14.964426  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:14.964429  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:14.964432  141469 system_pods.go:61] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:14.964435  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:14.964441  141469 system_pods.go:61] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:14.964447  141469 system_pods.go:61] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:14.964460  141469 system_pods.go:74] duration metric: took 58.68072ms to wait for pod list to return data ...
	I1212 01:08:14.964476  141469 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:15.161106  141469 default_sa.go:45] found service account: "default"
	I1212 01:08:15.161137  141469 default_sa.go:55] duration metric: took 196.651344ms for default service account to be created ...
	I1212 01:08:15.161147  141469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:15.363429  141469 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:15.363457  141469 system_pods.go:89] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:15.363462  141469 system_pods.go:89] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:15.363466  141469 system_pods.go:89] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:15.363470  141469 system_pods.go:89] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:15.363473  141469 system_pods.go:89] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:15.363477  141469 system_pods.go:89] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:15.363480  141469 system_pods.go:89] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:15.363487  141469 system_pods.go:89] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:15.363492  141469 system_pods.go:89] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:15.363501  141469 system_pods.go:126] duration metric: took 202.347796ms to wait for k8s-apps to be running ...
	I1212 01:08:15.363508  141469 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:15.363553  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:15.378498  141469 system_svc.go:56] duration metric: took 14.977368ms WaitForService to wait for kubelet
	I1212 01:08:15.378527  141469 kubeadm.go:582] duration metric: took 7.294704666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:15.378545  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:15.561384  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:15.561408  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:15.561422  141469 node_conditions.go:105] duration metric: took 182.869791ms to run NodePressure ...
	I1212 01:08:15.561435  141469 start.go:241] waiting for startup goroutines ...
	I1212 01:08:15.561442  141469 start.go:246] waiting for cluster config update ...
	I1212 01:08:15.561453  141469 start.go:255] writing updated cluster config ...
	I1212 01:08:15.561693  141469 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:15.615106  141469 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:15.617073  141469 out.go:177] * Done! kubectl is now configured to use "embed-certs-607268" cluster and "default" namespace by default
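The pod_ready.go lines throughout this run are a simple poll on each pod's Ready condition (node_ready.go applies the same idea to the node); note that the metrics-server pod above never reports Ready within the window shown. A minimal client-go sketch of that wait loop, assuming a kubeconfig at the default location; it illustrates the pattern, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its Ready condition is True, roughly the
    // loop behind the repeated `has status "Ready":"False"` lines above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // poll interval; the log shows ~2s spacing
        }
        return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Pod name taken from the log above; substitute any pod to watch.
        if err := waitPodReady(cs, "kube-system", "etcd-embed-certs-607268", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }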
	I1212 01:08:14.771660  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.434092304s)
	I1212 01:08:14.771750  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:14.802721  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:08:14.813349  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:08:14.826608  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:08:14.826637  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:08:14.826693  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:08:14.842985  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:08:14.843060  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:08:14.855326  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:08:14.872371  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:08:14.872449  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:08:14.883793  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.894245  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:08:14.894306  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.906163  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:08:14.915821  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:08:14.915867  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:08:14.926019  141884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:08:15.092424  141884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:13.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:15.096259  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:17.596953  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:20.095957  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:22.096970  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:23.562216  141884 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:08:23.562302  141884 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:08:23.562463  141884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:08:23.562655  141884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:08:23.562786  141884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:08:23.562870  141884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:08:23.564412  141884 out.go:235]   - Generating certificates and keys ...
	I1212 01:08:23.564519  141884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:08:23.564605  141884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:08:23.564718  141884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:08:23.564802  141884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:08:23.564879  141884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:08:23.564925  141884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:08:23.565011  141884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:08:23.565110  141884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:08:23.565230  141884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:08:23.565352  141884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:08:23.565393  141884 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:08:23.565439  141884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:08:23.565485  141884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:08:23.565537  141884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:08:23.565582  141884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:08:23.565636  141884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:08:23.565700  141884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:08:23.565786  141884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:08:23.565885  141884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:08:23.567104  141884 out.go:235]   - Booting up control plane ...
	I1212 01:08:23.567195  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:08:23.567267  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:08:23.567353  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:08:23.567472  141884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:08:23.567579  141884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:08:23.567662  141884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:08:23.567812  141884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:08:23.567953  141884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:08:23.568010  141884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001996966s
	I1212 01:08:23.568071  141884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:08:23.568125  141884 kubeadm.go:310] [api-check] The API server is healthy after 5.001946459s
	I1212 01:08:23.568266  141884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:23.568424  141884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:23.568510  141884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:23.568702  141884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-076578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:23.568789  141884 kubeadm.go:310] [bootstrap-token] Using token: 472xql.x3zqihc9l5oj308m
	I1212 01:08:23.570095  141884 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:23.570226  141884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:23.570353  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:23.570550  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:23.570719  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:23.570880  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:23.571006  141884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:23.571186  141884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:23.571245  141884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:23.571322  141884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:23.571333  141884 kubeadm.go:310] 
	I1212 01:08:23.571411  141884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:23.571421  141884 kubeadm.go:310] 
	I1212 01:08:23.571530  141884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:23.571551  141884 kubeadm.go:310] 
	I1212 01:08:23.571609  141884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:23.571711  141884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:23.571795  141884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:23.571808  141884 kubeadm.go:310] 
	I1212 01:08:23.571892  141884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:23.571907  141884 kubeadm.go:310] 
	I1212 01:08:23.571985  141884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:23.571992  141884 kubeadm.go:310] 
	I1212 01:08:23.572069  141884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:23.572184  141884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:23.572276  141884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:23.572286  141884 kubeadm.go:310] 
	I1212 01:08:23.572413  141884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:23.572516  141884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:23.572525  141884 kubeadm.go:310] 
	I1212 01:08:23.572656  141884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.572805  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:23.572847  141884 kubeadm.go:310] 	--control-plane 
	I1212 01:08:23.572856  141884 kubeadm.go:310] 
	I1212 01:08:23.572973  141884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:23.572991  141884 kubeadm.go:310] 
	I1212 01:08:23.573107  141884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.573248  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:23.573273  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:08:23.573283  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:23.574736  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:23.575866  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:23.590133  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
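For reference, the 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist configures the bridge CNI plugin (plus portmap) that the "Configuring bridge CNI" step announces. The exact file minikube embeds is not shown in this log, so the sketch below only illustrates the general shape of such a conflist; the field values, including the 10.244.0.0/16 subnet, are assumptions.

package main

import (
	"encoding/json"
	"fmt"
)

// Print a representative bridge CNI conflist of the kind written to
// /etc/cni/net.d/1-k8s.conflist. Illustrative only; the real file may differ.
func main() {
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}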
	I1212 01:08:23.613644  141884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:23.613737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:23.613759  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-076578 minikube.k8s.io/updated_at=2024_12_12T01_08_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=default-k8s-diff-port-076578 minikube.k8s.io/primary=true
	I1212 01:08:23.642646  141884 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:23.831478  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.331749  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.832158  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.331630  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.831737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:26.331787  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.597126  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:27.095607  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:26.831860  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.331748  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.448891  141884 kubeadm.go:1113] duration metric: took 3.835231667s to wait for elevateKubeSystemPrivileges
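The repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges wait: minikube retries roughly every 500ms until the default service account exists, which signals that the cluster can honor the minikube-rbac cluster-admin binding created a few lines earlier. A rough Go sketch of that polling loop; binary and kubeconfig paths are taken from the log, while the 2-minute deadline is an assumption made for the sketch.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll `kubectl get sa default` until the default service account exists.
func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	deadline := time.Now().Add(2 * time.Minute) // assumed cap, not from the log
	for time.Now().Before(deadline) {
		if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
			fmt.Println("default service account exists; kube-system privileges are in place")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}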
	I1212 01:08:27.448930  141884 kubeadm.go:394] duration metric: took 5m2.053707834s to StartCluster
	I1212 01:08:27.448957  141884 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.449060  141884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:27.450918  141884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.451183  141884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:27.451263  141884 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:27.451385  141884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451409  141884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451417  141884 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:08:27.451413  141884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451449  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:27.451454  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451465  141884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-076578"
	I1212 01:08:27.451423  141884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451570  141884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451586  141884 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:27.451648  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451876  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451905  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451927  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.451942  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452055  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.452096  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452939  141884 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:27.454521  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:27.467512  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1212 01:08:27.467541  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I1212 01:08:27.467581  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1212 01:08:27.468032  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468069  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468039  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468580  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468592  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468604  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468609  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468620  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468635  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468968  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.469191  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.469562  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469579  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469613  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.469623  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.472898  141884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.472925  141884 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:27.472956  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.473340  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.473389  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.485014  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I1212 01:08:27.485438  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.486058  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.486077  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.486629  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.486832  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.487060  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1212 01:08:27.487779  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.488503  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.488527  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.488910  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.489132  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.489304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.489892  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1212 01:08:27.490599  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.490758  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.491213  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.491236  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.491385  141884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:27.491606  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.492230  141884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:27.492375  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.492420  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.493368  141884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.493382  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:27.493397  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.493462  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:27.493468  141884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:27.493481  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.496807  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497273  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.497304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497474  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.497647  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.497691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497771  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.497922  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.498178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.498190  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.498288  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.498467  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.498634  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.498779  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.512025  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1212 01:08:27.512490  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.513168  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.513187  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.513474  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.513664  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.514930  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.515106  141884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.515119  141884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:27.515131  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.520051  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520084  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.520183  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520419  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.520574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.520737  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.520828  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.692448  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:27.712214  141884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724269  141884 node_ready.go:49] node "default-k8s-diff-port-076578" has status "Ready":"True"
	I1212 01:08:27.724301  141884 node_ready.go:38] duration metric: took 12.044784ms for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724313  141884 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:27.729135  141884 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:27.768566  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:27.768596  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:27.782958  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.797167  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:27.797190  141884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:27.828960  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:27.828983  141884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:27.871251  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.883614  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
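Both addon installs above follow the same two-step pattern: copy each manifest into /etc/kubernetes/addons/ on the node, then apply them in a single kubectl call using the in-VM kubeconfig. A minimal sketch of the metrics-server apply, with paths taken from the log and error handling reduced to printing the combined output:

package main

import (
	"fmt"
	"os/exec"
)

// Apply the previously copied metrics-server manifests in one kubectl call,
// matching the command shown in the log (KUBECONFIG is passed through sudo).
func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}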
	I1212 01:08:28.198044  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198090  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198457  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198510  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198522  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.198532  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198544  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198817  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198815  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198844  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.277379  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.277405  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.277719  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.277741  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955418  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084128053s)
	I1212 01:08:28.955472  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955561  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071904294s)
	I1212 01:08:28.955624  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955646  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955856  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.955874  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955881  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955888  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.957731  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957740  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957748  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957761  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957802  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957814  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957823  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.957836  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.958072  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.958090  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.958100  141884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-076578"
	I1212 01:08:28.959879  141884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:28.961027  141884 addons.go:510] duration metric: took 1.509771178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:29.241061  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:29.241090  141884 pod_ready.go:82] duration metric: took 1.511925292s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:29.241106  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:31.247610  141884 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:29.095906  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:31.593942  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:33.246910  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.246933  141884 pod_ready.go:82] duration metric: took 4.005818542s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.246944  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753325  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.753350  141884 pod_ready.go:82] duration metric: took 506.39921ms for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753360  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758733  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.758759  141884 pod_ready.go:82] duration metric: took 5.391762ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758769  141884 pod_ready.go:39] duration metric: took 6.034446537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
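Each pod_ready line above (and the recurring metrics-server-6867b74b74-xzkbn checks from the other profile) boils down to the same test: fetch the pod and look at its Ready condition. A sketch using client-go; the kubeconfig path and pod name below are taken from this log, and the 2-second poll interval is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// signal the pod_ready.go lines are waiting on.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"etcd-default-k8s-diff-port-076578", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod has status \"Ready\":\"True\"")
			return
		}
		time.Sleep(2 * time.Second)
	}
}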
	I1212 01:08:33.758789  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:33.758854  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:33.774952  141884 api_server.go:72] duration metric: took 6.323732468s to wait for apiserver process to appear ...
	I1212 01:08:33.774976  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:33.774995  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:08:33.780463  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:08:33.781364  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:33.781387  141884 api_server.go:131] duration metric: took 6.404187ms to wait for apiserver health ...
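The healthz wait above is a plain HTTPS probe of the apiserver on port 8444 followed by a version read. A minimal sketch, assuming TLS verification is skipped to keep it short (the real check authenticates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Poll https://<node-ip>:8444/healthz until it answers 200 "ok", then fetch
// /version. The node IP is taken from the log above.
func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	base := "https://192.168.39.174:8444"

	for {
		resp, err := client.Get(base + "/healthz")
		if err == nil && resp.StatusCode == http.StatusOK {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s/healthz returned %d: %s\n", base, resp.StatusCode, body)
			break
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}

	if resp, err := client.Get(base + "/version"); err == nil {
		v, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println("control plane version info:", string(v))
	}
}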
	I1212 01:08:33.781396  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:33.786570  141884 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:33.786591  141884 system_pods.go:61] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.786596  141884 system_pods.go:61] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.786599  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.786603  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.786606  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.786610  141884 system_pods.go:61] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.786615  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.786623  141884 system_pods.go:61] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.786630  141884 system_pods.go:61] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.786643  141884 system_pods.go:74] duration metric: took 5.239236ms to wait for pod list to return data ...
	I1212 01:08:33.786655  141884 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:33.789776  141884 default_sa.go:45] found service account: "default"
	I1212 01:08:33.789794  141884 default_sa.go:55] duration metric: took 3.13371ms for default service account to be created ...
	I1212 01:08:33.789801  141884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:33.794118  141884 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:33.794139  141884 system_pods.go:89] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.794145  141884 system_pods.go:89] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.794149  141884 system_pods.go:89] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.794154  141884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.794157  141884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.794161  141884 system_pods.go:89] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.794165  141884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.794170  141884 system_pods.go:89] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.794177  141884 system_pods.go:89] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.794185  141884 system_pods.go:126] duration metric: took 4.378791ms to wait for k8s-apps to be running ...
	I1212 01:08:33.794194  141884 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:33.794233  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:33.809257  141884 system_svc.go:56] duration metric: took 15.051528ms WaitForService to wait for kubelet
	I1212 01:08:33.809290  141884 kubeadm.go:582] duration metric: took 6.358073584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:33.809323  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:33.813154  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:33.813174  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:33.813183  141884 node_conditions.go:105] duration metric: took 3.85493ms to run NodePressure ...
	I1212 01:08:33.813194  141884 start.go:241] waiting for startup goroutines ...
	I1212 01:08:33.813200  141884 start.go:246] waiting for cluster config update ...
	I1212 01:08:33.813210  141884 start.go:255] writing updated cluster config ...
	I1212 01:08:33.813474  141884 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:33.862511  141884 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:33.864367  141884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-076578" cluster and "default" namespace by default
	I1212 01:08:33.594621  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:34.589133  141411 pod_ready.go:82] duration metric: took 4m0.000384717s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	E1212 01:08:34.589166  141411 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:08:34.589184  141411 pod_ready.go:39] duration metric: took 4m8.190648334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:34.589214  141411 kubeadm.go:597] duration metric: took 4m15.984656847s to restartPrimaryControlPlane
	W1212 01:08:34.589299  141411 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:08:34.589327  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:08:39.900234  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:08:39.900966  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:39.901216  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:44.901739  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:44.901921  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:54.902652  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:54.902877  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
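The [kubelet-check] messages from this third profile show kubeadm probing the kubelet's local healthz endpoint and getting connection refused because the kubelet has not come up yet. A sketch of that probe loop; the 4-minute deadline matches the limit kubeadm reports elsewhere in this log, while the 5-second retry interval is an assumption.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Repeatedly call the kubelet healthz endpoint on localhost:10248, the same
// URL kubeadm's [kubelet-check] uses, until it answers 200 or time runs out.
func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(4 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err != nil {
			fmt.Println("kubelet not healthy yet:", err) // e.g. connect: connection refused
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("kubelet did not become healthy before the deadline")
}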
	I1212 01:09:00.919650  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.330292422s)
	I1212 01:09:00.919762  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:00.956649  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:09:00.976311  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:00.999339  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:00.999364  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:00.999413  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:01.013048  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:01.013112  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:01.027407  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:01.036801  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:01.036854  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:01.046865  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.056325  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:01.056390  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.066574  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:01.078080  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:01.078130  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:09:01.088810  141411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:01.249481  141411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:09.318633  141411 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:09:09.318694  141411 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:09:09.318789  141411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:09:09.318924  141411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:09:09.319074  141411 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:09:09.319185  141411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:09:09.320615  141411 out.go:235]   - Generating certificates and keys ...
	I1212 01:09:09.320710  141411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:09:09.320803  141411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:09:09.320886  141411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:09:09.320957  141411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:09:09.321061  141411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:09:09.321118  141411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:09:09.321188  141411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:09:09.321249  141411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:09:09.321334  141411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:09:09.321442  141411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:09:09.321516  141411 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:09:09.321611  141411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:09:09.321698  141411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:09:09.321775  141411 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:09:09.321849  141411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:09:09.321924  141411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:09:09.321973  141411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:09:09.322099  141411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:09:09.322204  141411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:09:09.323661  141411 out.go:235]   - Booting up control plane ...
	I1212 01:09:09.323780  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:09:09.323864  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:09:09.323950  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:09:09.324082  141411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:09:09.324181  141411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:09:09.324255  141411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:09:09.324431  141411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:09:09.324571  141411 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:09:09.324647  141411 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.39943ms
	I1212 01:09:09.324730  141411 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:09:09.324780  141411 kubeadm.go:310] [api-check] The API server is healthy after 5.001520724s
	I1212 01:09:09.324876  141411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:09:09.325036  141411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:09:09.325136  141411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:09:09.325337  141411 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-242725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:09:09.325401  141411 kubeadm.go:310] [bootstrap-token] Using token: k8uf20.0v0t2d7mhtmwxurz
	I1212 01:09:09.326715  141411 out.go:235]   - Configuring RBAC rules ...
	I1212 01:09:09.326840  141411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:09:09.326938  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:09:09.327149  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:09:09.327329  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:09:09.327498  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:09:09.327643  141411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:09:09.327787  141411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:09:09.327852  141411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:09:09.327926  141411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:09:09.327935  141411 kubeadm.go:310] 
	I1212 01:09:09.328027  141411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:09:09.328036  141411 kubeadm.go:310] 
	I1212 01:09:09.328138  141411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:09:09.328148  141411 kubeadm.go:310] 
	I1212 01:09:09.328183  141411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:09:09.328253  141411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:09:09.328302  141411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:09:09.328308  141411 kubeadm.go:310] 
	I1212 01:09:09.328396  141411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:09:09.328413  141411 kubeadm.go:310] 
	I1212 01:09:09.328478  141411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:09:09.328489  141411 kubeadm.go:310] 
	I1212 01:09:09.328554  141411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:09:09.328643  141411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:09:09.328719  141411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:09:09.328727  141411 kubeadm.go:310] 
	I1212 01:09:09.328797  141411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:09:09.328885  141411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:09:09.328894  141411 kubeadm.go:310] 
	I1212 01:09:09.328997  141411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329096  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:09:09.329120  141411 kubeadm.go:310] 	--control-plane 
	I1212 01:09:09.329126  141411 kubeadm.go:310] 
	I1212 01:09:09.329201  141411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:09:09.329209  141411 kubeadm.go:310] 
	I1212 01:09:09.329276  141411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329374  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:09:09.329386  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:09:09.329393  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:09:09.330870  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:09:09.332191  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:09:09.345593  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
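The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration; its contents are not shown in this log, but a typical bridge + portmap conflist looks roughly like the sketch below (field values are assumptions, not minikube's actual file):

# Hedged sketch of a bridge CNI conflist; values are illustrative only.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {"type": "bridge", "bridge": "bridge", "addIf": "true",
     "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
EOF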
	I1212 01:09:09.366177  141411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:09:09.366234  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:09.366252  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-242725 minikube.k8s.io/updated_at=2024_12_12T01_09_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=no-preload-242725 minikube.k8s.io/primary=true
	I1212 01:09:09.589709  141411 ops.go:34] apiserver oom_adj: -16
	I1212 01:09:09.589889  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.090703  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.590697  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.090698  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.590027  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.090413  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.590626  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.090322  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.590174  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.090032  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.233581  141411 kubeadm.go:1113] duration metric: took 4.867404479s to wait for elevateKubeSystemPrivileges
	I1212 01:09:14.233636  141411 kubeadm.go:394] duration metric: took 4m55.678870659s to StartCluster
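The repeated `kubectl get sa default` calls above are minikube polling until the default ServiceAccount exists, as part of the RBAC bootstrap started with the minikube-rbac clusterrolebinding at 01:09:09.366. A rough shell equivalent of that wait, mirroring the commands in the log:

# Hedged shell equivalent of the polling above: wait for the default
# ServiceAccount to appear before relying on the RBAC bootstrap.
until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done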
	I1212 01:09:14.233674  141411 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.233790  141411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:09:14.236087  141411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.236385  141411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:09:14.236460  141411 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:09:14.236567  141411 addons.go:69] Setting storage-provisioner=true in profile "no-preload-242725"
	I1212 01:09:14.236583  141411 addons.go:69] Setting default-storageclass=true in profile "no-preload-242725"
	I1212 01:09:14.236610  141411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-242725"
	I1212 01:09:14.236611  141411 addons.go:69] Setting metrics-server=true in profile "no-preload-242725"
	I1212 01:09:14.236631  141411 addons.go:234] Setting addon metrics-server=true in "no-preload-242725"
	W1212 01:09:14.236646  141411 addons.go:243] addon metrics-server should already be in state true
	I1212 01:09:14.236682  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.236588  141411 addons.go:234] Setting addon storage-provisioner=true in "no-preload-242725"
	I1212 01:09:14.236687  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1212 01:09:14.236712  141411 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:09:14.236838  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.237093  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237141  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237185  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237101  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237227  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237235  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237863  141411 out.go:177] * Verifying Kubernetes components...
	I1212 01:09:14.239284  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:09:14.254182  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I1212 01:09:14.254405  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I1212 01:09:14.254418  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1212 01:09:14.254742  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254857  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254874  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255388  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255415  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255439  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255803  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255814  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255807  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.256218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.256360  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256396  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.256524  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256567  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.259313  141411 addons.go:234] Setting addon default-storageclass=true in "no-preload-242725"
	W1212 01:09:14.259330  141411 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:09:14.259357  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.259575  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.259621  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.273148  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I1212 01:09:14.273601  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.273909  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
	I1212 01:09:14.274174  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274200  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274282  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.274560  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.274785  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274801  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274866  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.275126  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.275280  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.276840  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.277013  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.278945  141411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:09:14.279016  141411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:09:14.903981  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:14.904298  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:14.280219  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:09:14.280239  141411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:09:14.280268  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.280440  141411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.280450  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:09:14.280464  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.281368  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I1212 01:09:14.282054  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.282652  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.282673  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.283314  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.283947  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.283990  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.284230  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284232  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284802  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.284830  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285052  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285088  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.285106  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285247  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285458  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285483  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285619  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285624  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.285761  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285880  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.323872  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I1212 01:09:14.324336  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.324884  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.324906  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.325248  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.325437  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.326991  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.327217  141411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.327237  141411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:09:14.327258  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.330291  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.330895  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.330910  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.330926  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.331062  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.331219  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.331343  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.411182  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:09:14.454298  141411 node_ready.go:35] waiting up to 6m0s for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467328  141411 node_ready.go:49] node "no-preload-242725" has status "Ready":"True"
	I1212 01:09:14.467349  141411 node_ready.go:38] duration metric: took 13.017274ms for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467359  141411 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:14.482865  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:14.557685  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.594366  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.602730  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:09:14.602760  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:09:14.666446  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:09:14.666474  141411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:09:14.746040  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.746075  141411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:09:14.799479  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.862653  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.862688  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863687  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.863706  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.863721  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.863730  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863740  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:14.863988  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.864007  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878604  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.878630  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.878903  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.878944  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878914  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.914665  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320255607s)
	I1212 01:09:15.914726  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.914741  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915158  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.915204  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915219  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:15.915236  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.915249  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915499  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915528  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.106582  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.307047373s)
	I1212 01:09:16.106635  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.106652  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107000  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107020  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107030  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.107037  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107298  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107317  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107328  141411 addons.go:475] Verifying addon metrics-server=true in "no-preload-242725"
	I1212 01:09:16.107305  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:16.108981  141411 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:09:16.110608  141411 addons.go:510] duration metric: took 1.874161814s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
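With default-storageclass, storage-provisioner and metrics-server enabled, a quick manual verification of the same addons would look like this (hedged; standard kubectl against the profile's context, not part of the test run):

# Hedged verification sketch for the addons enabled above.
kubectl --context no-preload-242725 get storageclass
kubectl --context no-preload-242725 -n kube-system get deploy metrics-server
kubectl --context no-preload-242725 top nodes   # succeeds once metrics-server is serving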
	I1212 01:09:16.498983  141411 pod_ready.go:103] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:09:16.989762  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:16.989784  141411 pod_ready.go:82] duration metric: took 2.506893862s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:16.989795  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996560  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:17.996582  141411 pod_ready.go:82] duration metric: took 1.00678165s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996593  141411 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002275  141411 pod_ready.go:93] pod "etcd-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.002294  141411 pod_ready.go:82] duration metric: took 5.694407ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002308  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006942  141411 pod_ready.go:93] pod "kube-apiserver-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.006965  141411 pod_ready.go:82] duration metric: took 4.650802ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006978  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011581  141411 pod_ready.go:93] pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.011621  141411 pod_ready.go:82] duration metric: took 4.634646ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011634  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187112  141411 pod_ready.go:93] pod "kube-proxy-5kc2s" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.187143  141411 pod_ready.go:82] duration metric: took 175.498685ms for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187156  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.586974  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.587003  141411 pod_ready.go:82] duration metric: took 399.836187ms for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.587012  141411 pod_ready.go:39] duration metric: took 4.119642837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:18.587032  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:09:18.587091  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:18.603406  141411 api_server.go:72] duration metric: took 4.366985373s to wait for apiserver process to appear ...
	I1212 01:09:18.603446  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:09:18.603473  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:09:18.609003  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:09:18.609950  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:09:18.609968  141411 api_server.go:131] duration metric: took 6.513408ms to wait for apiserver health ...
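The health probe above hits https://192.168.61.222:8443/healthz directly; the same check can be reproduced through kubectl, which handles the credentials (hedged sketch):

# Hedged sketch: reproduce the apiserver health check shown above.
kubectl --context no-preload-242725 get --raw /healthz           # expect: ok
kubectl --context no-preload-242725 get --raw '/readyz?verbose'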
	I1212 01:09:18.609976  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:09:18.790460  141411 system_pods.go:59] 9 kube-system pods found
	I1212 01:09:18.790494  141411 system_pods.go:61] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:18.790502  141411 system_pods.go:61] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:18.790507  141411 system_pods.go:61] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:18.790510  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:18.790515  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:18.790520  141411 system_pods.go:61] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:18.790525  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:18.790534  141411 system_pods.go:61] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:18.790540  141411 system_pods.go:61] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:18.790556  141411 system_pods.go:74] duration metric: took 180.570066ms to wait for pod list to return data ...
	I1212 01:09:18.790566  141411 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:09:18.987130  141411 default_sa.go:45] found service account: "default"
	I1212 01:09:18.987172  141411 default_sa.go:55] duration metric: took 196.594497ms for default service account to be created ...
	I1212 01:09:18.987185  141411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:09:19.189233  141411 system_pods.go:86] 9 kube-system pods found
	I1212 01:09:19.189262  141411 system_pods.go:89] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:19.189267  141411 system_pods.go:89] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:19.189271  141411 system_pods.go:89] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:19.189274  141411 system_pods.go:89] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:19.189290  141411 system_pods.go:89] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:19.189294  141411 system_pods.go:89] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:19.189300  141411 system_pods.go:89] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:19.189308  141411 system_pods.go:89] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:19.189318  141411 system_pods.go:89] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:19.189331  141411 system_pods.go:126] duration metric: took 202.137957ms to wait for k8s-apps to be running ...
	I1212 01:09:19.189341  141411 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:09:19.189391  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:19.204241  141411 system_svc.go:56] duration metric: took 14.889522ms WaitForService to wait for kubelet
	I1212 01:09:19.204272  141411 kubeadm.go:582] duration metric: took 4.967858935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:09:19.204289  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:09:19.387735  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:09:19.387760  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:09:19.387768  141411 node_conditions.go:105] duration metric: took 183.47486ms to run NodePressure ...
	I1212 01:09:19.387780  141411 start.go:241] waiting for startup goroutines ...
	I1212 01:09:19.387787  141411 start.go:246] waiting for cluster config update ...
	I1212 01:09:19.387796  141411 start.go:255] writing updated cluster config ...
	I1212 01:09:19.388041  141411 ssh_runner.go:195] Run: rm -f paused
	I1212 01:09:19.437923  141411 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:09:19.439913  141411 out.go:177] * Done! kubectl is now configured to use "no-preload-242725" cluster and "default" namespace by default
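The closing line reports a client/server skew of one minor version (kubectl 1.32.0 against cluster 1.31.2), which is within kubectl's documented ±1 skew; it can be re-checked at any time (hedged):

# Hedged sketch: confirm the version skew noted above.
kubectl --context no-preload-242725 version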
	I1212 01:09:54.906484  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:54.906805  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:54.906828  142150 kubeadm.go:310] 
	I1212 01:09:54.906866  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:09:54.906908  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:09:54.906915  142150 kubeadm.go:310] 
	I1212 01:09:54.906944  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:09:54.906974  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:09:54.907087  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:09:54.907106  142150 kubeadm.go:310] 
	I1212 01:09:54.907205  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:09:54.907240  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:09:54.907271  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:09:54.907277  142150 kubeadm.go:310] 
	I1212 01:09:54.907369  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:09:54.907474  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:09:54.907499  142150 kubeadm.go:310] 
	I1212 01:09:54.907659  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:09:54.907749  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:09:54.907815  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:09:54.907920  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:09:54.907937  142150 kubeadm.go:310] 
	I1212 01:09:54.909051  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:54.909171  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:09:54.909277  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:09:54.909442  142150 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
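The failure above (process 142150, a v1.20.0 cluster on CRI-O) points at the kubelet never becoming healthy on port 10248. The kubeadm output's own suggestions collapse into a short diagnosis pass; a hedged sketch, run inside the node (for example via `minikube -p <profile> ssh`, profile name assumed):

# Hedged diagnosis sketch, consolidating the commands recommended above.
sudo systemctl status kubelet
sudo journalctl -xeu kubelet --no-pager | tail -n 100
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
# then, for a failing container:
# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID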
	
	I1212 01:09:54.909493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:09:55.377787  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:55.393139  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:55.403640  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:55.403664  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:55.403707  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:55.413315  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:55.413394  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:55.422954  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:55.432010  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:55.432073  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:55.441944  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.451991  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:55.452064  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.461584  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:55.471118  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:55.471191  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
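The grep/rm sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint, and here every grep fails because `kubeadm reset` already removed the files. A rough shell equivalent of the same pass:

# Hedged shell equivalent of the stale-kubeconfig cleanup above.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
    sudo rm -f "/etc/kubernetes/$f"
  fi
done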
	I1212 01:09:55.480829  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:55.713359  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:11:51.592618  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:11:51.592716  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:11:51.594538  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:11:51.594601  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:11:51.594684  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:11:51.594835  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:11:51.594954  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:11:51.595052  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:11:51.597008  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:11:51.597118  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:11:51.597173  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:11:51.597241  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:11:51.597297  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:11:51.597359  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:11:51.597427  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:11:51.597508  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:11:51.597585  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:11:51.597681  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:11:51.597766  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:11:51.597804  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:11:51.597869  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:11:51.597941  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:11:51.598021  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:11:51.598119  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:11:51.598207  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:11:51.598320  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:11:51.598427  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:11:51.598485  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:11:51.598577  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:11:51.599918  142150 out.go:235]   - Booting up control plane ...
	I1212 01:11:51.600024  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:11:51.600148  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:11:51.600229  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:11:51.600341  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:11:51.600507  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:11:51.600572  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:11:51.600672  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.600878  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.600992  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601222  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601285  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601456  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601515  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601702  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601804  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.602020  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.602033  142150 kubeadm.go:310] 
	I1212 01:11:51.602093  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:11:51.602153  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:11:51.602163  142150 kubeadm.go:310] 
	I1212 01:11:51.602211  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:11:51.602274  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:11:51.602393  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:11:51.602416  142150 kubeadm.go:310] 
	I1212 01:11:51.602561  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:11:51.602618  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:11:51.602651  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:11:51.602661  142150 kubeadm.go:310] 
	I1212 01:11:51.602794  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:11:51.602919  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:11:51.602928  142150 kubeadm.go:310] 
	I1212 01:11:51.603023  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:11:51.603110  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:11:51.603176  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:11:51.603237  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:11:51.603252  142150 kubeadm.go:310] 
	I1212 01:11:51.603327  142150 kubeadm.go:394] duration metric: took 8m2.544704165s to StartCluster
	I1212 01:11:51.603376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:51.603447  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:51.648444  142150 cri.go:89] found id: ""
	I1212 01:11:51.648488  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.648501  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:11:51.648509  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:11:51.648573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:51.687312  142150 cri.go:89] found id: ""
	I1212 01:11:51.687341  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.687354  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:11:51.687362  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:11:51.687419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:51.726451  142150 cri.go:89] found id: ""
	I1212 01:11:51.726505  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.726521  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:11:51.726529  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:51.726594  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:51.763077  142150 cri.go:89] found id: ""
	I1212 01:11:51.763112  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.763125  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:11:51.763132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:51.763194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:51.801102  142150 cri.go:89] found id: ""
	I1212 01:11:51.801139  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.801152  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:51.801160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:51.801220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:51.838249  142150 cri.go:89] found id: ""
	I1212 01:11:51.838275  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.838283  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:11:51.838290  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:51.838357  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:51.874958  142150 cri.go:89] found id: ""
	I1212 01:11:51.874989  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.874997  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:51.875007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:11:51.875106  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:11:51.911408  142150 cri.go:89] found id: ""
	I1212 01:11:51.911440  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.911451  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
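The crictl sweep above checks every expected component for any container, running or exited, before log gathering; the per-component calls reduce to a loop like this (hedged, mirroring the commands in the log):

# Hedged sketch of the per-component container check above.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
            kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  [ -z "$ids" ] && echo "No container was found matching \"$name\""
done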
	I1212 01:11:51.911465  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:51.911483  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:51.997485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
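
The repeated "connection refused" on localhost:8443 simply means no kube-apiserver is listening yet, so the "describe nodes" gathering step cannot succeed until kubeadm init completes. A quick equivalent probe from the node (a sketch, assuming the default port 8443 used by this profile):

	curl -ksS https://localhost:8443/healthz || echo "apiserver not reachable on :8443"
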
	I1212 01:11:51.997516  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:11:51.997532  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:11:52.119827  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:11:52.119869  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:52.162270  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:52.162298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:52.215766  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:52.215805  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 01:11:52.231106  142150 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:11:52.231187  142150 out.go:270] * 
	W1212 01:11:52.231316  142150 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.231351  142150 out.go:270] * 
	W1212 01:11:52.232281  142150 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:11:52.235692  142150 out.go:201] 
	W1212 01:11:52.236852  142150 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.236890  142150 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:11:52.236910  142150 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:11:52.238333  142150 out.go:201] 
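
The failure mode above is K8S_KUBELET_NOT_RUNNING: kubeadm's wait-control-plane phase kept probing the kubelet health endpoint on 127.0.0.1:10248 and never got an answer, so no static pods (and therefore no apiserver on :8443) ever came up. A triage sketch assembled from the commands the output itself suggests, assuming a systemd host running CRI-O:

	systemctl status kubelet                               # unit state; the preflight warning shows the service is not enabled
	sudo journalctl -xeu kubelet -n 200                    # kubelet's own reason for failing, often a cgroup-driver mismatch
	curl -sS http://localhost:10248/healthz                # the exact probe kubeadm kept retrying
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet journal points at the cgroup driver, the suggestion logged above (--extra-config=kubelet.cgroup-driver=systemd, minikube issue #4172) is the relevant knob.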
	
	
	==> CRI-O <==
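
The entries below are CRI-O's debug log for incoming CRI requests (Version, ImageFsInfo, ListContainers) on the default-k8s-diff-port-076578 node, most likely the kubelet's periodic polling; each long ListContainers response is a single log line wrapped for display. The same snapshot can be pulled by hand, assuming CRI-O's default socket:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json
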
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.873774460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966255873744061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=feadebe0-8645-4022-acc6-132c63fd0025 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.876821550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe727a6f-7e4c-4d27-b097-cf756f85ae04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.876888497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe727a6f-7e4c-4d27-b097-cf756f85ae04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.877129705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b,PodSandboxId:e4798bc9a1216ac9764418f919a7d3d0dfb284bd64982bb1d3e29f8fae5dcc24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965709682228667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67b42bd-ae67-4446-99ec-451650bd8c11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c,PodSandboxId:f91d6be142c4351cb052e93b5c455bb5dda2f8cc390fce1633220efc73cc7c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709300176494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9plj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e559d2-f6ac-4c21-b344-96266b6d3622,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687,PodSandboxId:1bdfcd11dd3d47672ca53c9f678dacaff18a497c579c5318e98068c214de57b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709232992554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6j4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 710be306-064a-4506-9649-51853913362d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f,PodSandboxId:9dba36f674d90be7b1ab32c5db9c5912f89e1660149990f228cbb6208508102c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733965708628410049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd2mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6293f3-649a-4a96-8e4c-1028fa12b909,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8,PodSandboxId:14d2abd6ec1160be4cf36411aaa6aba795cee342bf1c35a586b60fe438d6d98e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173396569
7616285162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690c5f76db609ba51d9a49e22a7df9a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b,PodSandboxId:4f3d6154fbb9c7b1872c0fcde032a20b995399b301268b3559beeb13a76b8be9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733965697582160963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2,PodSandboxId:991f61d8b6e80d40aaf7dee4e486410b764491b682700b42a84f0e8991ad0b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733965697538533648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a27a263f04266c589c6bd4f43bb0aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce,PodSandboxId:437f62643121bee8d13085d59d55156fd7de14234d8fd88ccfaff1d618b5ede7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965
697493100475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce87c2085fb5c3bde2b06ed071f751cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc,PodSandboxId:3b64060c9a03c19bc4b725ec8115a02dcbb27ecc17df9dd4d6b27592a738cf51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965407843174679,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe727a6f-7e4c-4d27-b097-cf756f85ae04 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.919659874Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c6ad1d24-5415-40a7-8696-48bdaaa90cc6 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.919757434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c6ad1d24-5415-40a7-8696-48bdaaa90cc6 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.921072650Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b613a6d1-797b-4a49-a4b3-b63df6b10020 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.921469965Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966255921442064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b613a6d1-797b-4a49-a4b3-b63df6b10020 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.922017826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfad6853-c7f6-45ec-a8df-5e4f08248907 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.922116164Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfad6853-c7f6-45ec-a8df-5e4f08248907 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.922413238Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b,PodSandboxId:e4798bc9a1216ac9764418f919a7d3d0dfb284bd64982bb1d3e29f8fae5dcc24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965709682228667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67b42bd-ae67-4446-99ec-451650bd8c11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c,PodSandboxId:f91d6be142c4351cb052e93b5c455bb5dda2f8cc390fce1633220efc73cc7c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709300176494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9plj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e559d2-f6ac-4c21-b344-96266b6d3622,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687,PodSandboxId:1bdfcd11dd3d47672ca53c9f678dacaff18a497c579c5318e98068c214de57b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709232992554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6j4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 710be306-064a-4506-9649-51853913362d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f,PodSandboxId:9dba36f674d90be7b1ab32c5db9c5912f89e1660149990f228cbb6208508102c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733965708628410049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd2mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6293f3-649a-4a96-8e4c-1028fa12b909,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8,PodSandboxId:14d2abd6ec1160be4cf36411aaa6aba795cee342bf1c35a586b60fe438d6d98e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173396569
7616285162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690c5f76db609ba51d9a49e22a7df9a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b,PodSandboxId:4f3d6154fbb9c7b1872c0fcde032a20b995399b301268b3559beeb13a76b8be9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733965697582160963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2,PodSandboxId:991f61d8b6e80d40aaf7dee4e486410b764491b682700b42a84f0e8991ad0b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733965697538533648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a27a263f04266c589c6bd4f43bb0aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce,PodSandboxId:437f62643121bee8d13085d59d55156fd7de14234d8fd88ccfaff1d618b5ede7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965
697493100475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce87c2085fb5c3bde2b06ed071f751cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc,PodSandboxId:3b64060c9a03c19bc4b725ec8115a02dcbb27ecc17df9dd4d6b27592a738cf51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965407843174679,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfad6853-c7f6-45ec-a8df-5e4f08248907 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.963399006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5de54ad3-b8da-4a68-9ca5-e09cbd1e80fd name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.963493584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5de54ad3-b8da-4a68-9ca5-e09cbd1e80fd name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.965299485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b28303b7-b057-47c3-a579-af511e643e35 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.965747058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966255965724419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b28303b7-b057-47c3-a579-af511e643e35 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.966204550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab720813-d47f-49b6-bd13-a0f350ec54c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.966271234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab720813-d47f-49b6-bd13-a0f350ec54c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:35 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:35.966473441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b,PodSandboxId:e4798bc9a1216ac9764418f919a7d3d0dfb284bd64982bb1d3e29f8fae5dcc24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965709682228667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67b42bd-ae67-4446-99ec-451650bd8c11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c,PodSandboxId:f91d6be142c4351cb052e93b5c455bb5dda2f8cc390fce1633220efc73cc7c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709300176494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9plj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e559d2-f6ac-4c21-b344-96266b6d3622,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687,PodSandboxId:1bdfcd11dd3d47672ca53c9f678dacaff18a497c579c5318e98068c214de57b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709232992554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6j4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 710be306-064a-4506-9649-51853913362d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f,PodSandboxId:9dba36f674d90be7b1ab32c5db9c5912f89e1660149990f228cbb6208508102c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733965708628410049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd2mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6293f3-649a-4a96-8e4c-1028fa12b909,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8,PodSandboxId:14d2abd6ec1160be4cf36411aaa6aba795cee342bf1c35a586b60fe438d6d98e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173396569
7616285162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690c5f76db609ba51d9a49e22a7df9a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b,PodSandboxId:4f3d6154fbb9c7b1872c0fcde032a20b995399b301268b3559beeb13a76b8be9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733965697582160963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2,PodSandboxId:991f61d8b6e80d40aaf7dee4e486410b764491b682700b42a84f0e8991ad0b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733965697538533648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a27a263f04266c589c6bd4f43bb0aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce,PodSandboxId:437f62643121bee8d13085d59d55156fd7de14234d8fd88ccfaff1d618b5ede7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965
697493100475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce87c2085fb5c3bde2b06ed071f751cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc,PodSandboxId:3b64060c9a03c19bc4b725ec8115a02dcbb27ecc17df9dd4d6b27592a738cf51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965407843174679,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab720813-d47f-49b6-bd13-a0f350ec54c3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:36 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:36.000720562Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2859466a-dee7-45eb-88f7-f559cbf0ce37 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:36 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:36.000813640Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2859466a-dee7-45eb-88f7-f559cbf0ce37 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:17:36 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:36.002255673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=154dacce-2d60-4914-87b5-5367ed4502f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:36 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:36.002720549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966256002695059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=154dacce-2d60-4914-87b5-5367ed4502f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:17:36 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:36.003460575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0ce67f5-04ec-4a21-815d-48158dff0231 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:36 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:36.003533103Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0ce67f5-04ec-4a21-815d-48158dff0231 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:17:36 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:17:36.003791134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b,PodSandboxId:e4798bc9a1216ac9764418f919a7d3d0dfb284bd64982bb1d3e29f8fae5dcc24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965709682228667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67b42bd-ae67-4446-99ec-451650bd8c11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c,PodSandboxId:f91d6be142c4351cb052e93b5c455bb5dda2f8cc390fce1633220efc73cc7c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709300176494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9plj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e559d2-f6ac-4c21-b344-96266b6d3622,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687,PodSandboxId:1bdfcd11dd3d47672ca53c9f678dacaff18a497c579c5318e98068c214de57b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709232992554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6j4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 710be306-064a-4506-9649-51853913362d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f,PodSandboxId:9dba36f674d90be7b1ab32c5db9c5912f89e1660149990f228cbb6208508102c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733965708628410049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd2mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6293f3-649a-4a96-8e4c-1028fa12b909,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8,PodSandboxId:14d2abd6ec1160be4cf36411aaa6aba795cee342bf1c35a586b60fe438d6d98e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173396569
7616285162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690c5f76db609ba51d9a49e22a7df9a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b,PodSandboxId:4f3d6154fbb9c7b1872c0fcde032a20b995399b301268b3559beeb13a76b8be9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733965697582160963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2,PodSandboxId:991f61d8b6e80d40aaf7dee4e486410b764491b682700b42a84f0e8991ad0b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733965697538533648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a27a263f04266c589c6bd4f43bb0aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce,PodSandboxId:437f62643121bee8d13085d59d55156fd7de14234d8fd88ccfaff1d618b5ede7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965
697493100475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce87c2085fb5c3bde2b06ed071f751cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc,PodSandboxId:3b64060c9a03c19bc4b725ec8115a02dcbb27ecc17df9dd4d6b27592a738cf51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965407843174679,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0ce67f5-04ec-4a21-815d-48158dff0231 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f05fdc2ca6db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   e4798bc9a1216       storage-provisioner
	6e99deb43ee24       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   f91d6be142c43       coredns-7c65d6cfc9-9plj4
	d8f7f6160124c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   1bdfcd11dd3d4       coredns-7c65d6cfc9-v6j4v
	0f169e7b4faa3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   9dba36f674d90       kube-proxy-gd2mq
	a098ed9ecb9bc       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   14d2abd6ec116       kube-controller-manager-default-k8s-diff-port-076578
	24aff75b31aff       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   4f3d6154fbb9c       kube-apiserver-default-k8s-diff-port-076578
	a00c8fc8fb2fe       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   991f61d8b6e80       kube-scheduler-default-k8s-diff-port-076578
	e04e272572be4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   437f62643121b       etcd-default-k8s-diff-port-076578
	c058c57f9ad2b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   3b64060c9a03c       kube-apiserver-default-k8s-diff-port-076578
	
	
	==> coredns [6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-076578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-076578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=default-k8s-diff-port-076578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_12T01_08_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 01:08:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-076578
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 01:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 01:13:39 +0000   Thu, 12 Dec 2024 01:08:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 01:13:39 +0000   Thu, 12 Dec 2024 01:08:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 01:13:39 +0000   Thu, 12 Dec 2024 01:08:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 01:13:39 +0000   Thu, 12 Dec 2024 01:08:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    default-k8s-diff-port-076578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 69353d120eeb468b849268b0c7842c67
	  System UUID:                69353d12-0eeb-468b-8492-68b0c7842c67
	  Boot ID:                    5ca6dcf2-3db9-4538-97c0-226455ab2231
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9plj4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-7c65d6cfc9-v6j4v                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-default-k8s-diff-port-076578                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-default-k8s-diff-port-076578             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-076578    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-gd2mq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-default-k8s-diff-port-076578             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-6867b74b74-dkmwp                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m8s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m14s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s  kubelet          Node default-k8s-diff-port-076578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s  kubelet          Node default-k8s-diff-port-076578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s  kubelet          Node default-k8s-diff-port-076578 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m10s  node-controller  Node default-k8s-diff-port-076578 event: Registered Node default-k8s-diff-port-076578 in Controller
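	Note: in the pod table above, metrics-server-6867b74b74-dkmwp is the only kube-system workload with no corresponding Running container in the container status section, which lines up with the ImagePullBackOff entries in the kubelet log further down. A hedged way to confirm this from the same context (assuming the profile name doubles as the kubeconfig context, as with the other kubectl invocations in this report):
	
	  kubectl --context default-k8s-diff-port-076578 -n kube-system describe pod metrics-server-6867b74b74-dkmwp
	  kubectl --context default-k8s-diff-port-076578 -n kube-system get events --field-selector involvedObject.name=metrics-server-6867b74b74-dkmwp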
	
	
	==> dmesg <==
	[  +0.052764] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049439] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.091247] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.773392] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.658714] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.655946] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.063145] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069831] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.181018] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.149050] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.331077] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.423348] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.062640] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.115929] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[  +5.587633] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.656463] kauditd_printk_skb: 85 callbacks suppressed
	[Dec12 01:08] systemd-fstab-generator[2593]: Ignoring "noauto" option for root device
	[  +0.076525] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.506074] systemd-fstab-generator[2910]: Ignoring "noauto" option for root device
	[  +0.079835] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.877576] systemd-fstab-generator[3025]: Ignoring "noauto" option for root device
	[  +0.827709] kauditd_printk_skb: 34 callbacks suppressed
	[Dec12 01:09] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce] <==
	{"level":"info","ts":"2024-12-12T01:08:17.886392Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-12T01:08:17.887314Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-12-12T01:08:17.889889Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-12-12T01:08:17.891306Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-12T01:08:17.891233Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"72f328261b8d7407","initial-advertise-peer-urls":["https://192.168.39.174:2380"],"listen-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-12T01:08:18.720627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-12T01:08:18.720733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-12T01:08:18.720779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgPreVoteResp from 72f328261b8d7407 at term 1"}
	{"level":"info","ts":"2024-12-12T01:08:18.720808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became candidate at term 2"}
	{"level":"info","ts":"2024-12-12T01:08:18.720832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgVoteResp from 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2024-12-12T01:08:18.720859Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 2"}
	{"level":"info","ts":"2024-12-12T01:08:18.720884Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2024-12-12T01:08:18.723820Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:08:18.724885Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:default-k8s-diff-port-076578 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-12T01:08:18.724931Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:08:18.726106Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:08:18.726195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:08:18.726235Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:08:18.726197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:08:18.727014Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:08:18.727081Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:08:18.729510Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-12T01:08:18.729623Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2024-12-12T01:08:18.727369Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-12T01:08:18.733262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:17:36 up 14 min,  0 users,  load average: 0.20, 0.18, 0.16
	Linux default-k8s-diff-port-076578 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1212 01:13:21.206842       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:13:21.206894       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1212 01:13:21.208076       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:13:21.208150       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:14:21.209291       1 handler_proxy.go:99] no RequestInfo found in the context
	W1212 01:14:21.209311       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:14:21.209635       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1212 01:14:21.209640       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1212 01:14:21.210912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:14:21.210966       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:16:21.212087       1 handler_proxy.go:99] no RequestInfo found in the context
	W1212 01:16:21.212087       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:16:21.212749       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1212 01:16:21.212939       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:16:21.214080       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:16:21.214203       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
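	The repeating pattern above is the aggregation layer failing to reach the v1beta1.metrics.k8s.io APIService backed by metrics-server: each OpenAPI refresh gets a 503 and is rate-limit re-queued. A minimal sketch of checking this from outside the apiserver (the metrics-server Service name assumes the stock addon manifest):
	
	  # registration status of the aggregated API
	  kubectl --context default-k8s-diff-port-076578 get apiservice v1beta1.metrics.k8s.io -o wide
	  # whether anything is actually backing it
	  kubectl --context default-k8s-diff-port-076578 -n kube-system get endpoints metrics-server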
	
	
	==> kube-apiserver [c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc] <==
	W1212 01:08:13.630199       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.656217       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.722864       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.753254       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.783139       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.925261       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.014789       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.022222       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.064303       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.064516       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.085376       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.195926       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.284050       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.308173       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.351349       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.359796       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.363095       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.373830       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.451828       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.474711       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.517698       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.616224       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.679886       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.682363       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.692877       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8] <==
	E1212 01:12:27.227197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:12:27.655790       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:12:57.233855       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:12:57.665488       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:13:27.240404       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:13:27.675443       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:13:39.652219       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-076578"
	E1212 01:13:57.247525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:13:57.683920       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:14:13.944084       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="242.966µs"
	E1212 01:14:27.254615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:14:27.691017       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:14:27.940196       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="109.7µs"
	E1212 01:14:57.261889       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:14:57.697878       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:15:27.268486       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:15:27.705863       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:15:57.275075       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:15:57.713111       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:16:27.282799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:16:27.720672       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:16:57.289467       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:16:57.728634       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:17:27.296457       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:17:27.736464       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
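	These controller-manager errors are the same aggregated-API outage seen from the discovery side: resource-quota and garbage-collector syncs keep failing because metrics.k8s.io/v1beta1 never becomes discoverable. A direct probe of the discovery endpoint (it should return a small APIResourceList JSON once metrics-server is serving) would be:
	
	  kubectl --context default-k8s-diff-port-076578 get --raw /apis/metrics.k8s.io/v1beta1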
	
	
	==> kube-proxy [0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1212 01:08:29.370504       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1212 01:08:29.446289       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.174"]
	E1212 01:08:29.446416       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:08:29.667045       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 01:08:29.667148       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:08:29.667333       1 server_linux.go:169] "Using iptables Proxier"
	I1212 01:08:29.683791       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:08:29.684064       1 server.go:483] "Version info" version="v1.31.2"
	I1212 01:08:29.684075       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:08:29.689427       1 config.go:199] "Starting service config controller"
	I1212 01:08:29.689440       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1212 01:08:29.689470       1 config.go:105] "Starting endpoint slice config controller"
	I1212 01:08:29.689475       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1212 01:08:29.690876       1 config.go:328] "Starting node config controller"
	I1212 01:08:29.690897       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1212 01:08:29.789653       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1212 01:08:29.789717       1 shared_informer.go:320] Caches are synced for service config
	I1212 01:08:29.791077       1 shared_informer.go:320] Caches are synced for node config
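	The nftables errors at the top of this block appear to come from kube-proxy's best-effort cleanup of leftover nftables rules, which fails because this VM's kernel does not support those operations; kube-proxy itself is running the iptables proxier in IPv4 single-stack mode, per the "Using iptables Proxier" line, so the errors read as noise rather than a functional failure. To confirm the configured mode (the kube-proxy ConfigMap name assumes a kubeadm-style deployment, which is what minikube uses):
	
	  kubectl --context default-k8s-diff-port-076578 -n kube-system get configmap kube-proxy -o yaml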
	
	
	==> kube-scheduler [a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2] <==
	W1212 01:08:20.221947       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 01:08:20.222350       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1212 01:08:20.221230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 01:08:20.222417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:20.223040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 01:08:20.223156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:20.227238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:20.227346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.060012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:21.060045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.085732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 01:08:21.085852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.087674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:21.087770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.239269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 01:08:21.239670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.371938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:21.371989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.388896       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 01:08:21.388963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.477745       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 01:08:21.477796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.561472       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 01:08:21.561525       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1212 01:08:24.518529       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 01:16:23 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:23.055445    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966183055079268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:32 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:32.926801    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:16:33 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:33.058969    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966193058647134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:33 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:33.058991    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966193058647134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:43 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:43.060183    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966203059841462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:43 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:43.060214    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966203059841462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:45 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:45.928007    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:16:53 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:53.062363    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966213061949872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:53 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:53.062443    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966213061949872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:16:57 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:16:57.928264    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:17:03 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:03.068924    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966223068472566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:03 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:03.068978    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966223068472566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:12 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:12.927936    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:17:13 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:13.070707    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966233070107152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:13 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:13.070811    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966233070107152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:22 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:22.974368    2918 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 01:17:22 default-k8s-diff-port-076578 kubelet[2918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 01:17:22 default-k8s-diff-port-076578 kubelet[2918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 01:17:22 default-k8s-diff-port-076578 kubelet[2918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 01:17:22 default-k8s-diff-port-076578 kubelet[2918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 01:17:23 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:23.073898    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966243072964759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:23 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:23.073921    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966243072964759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:26 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:26.927001    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:17:33 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:33.075767    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966253075326934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:33 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:17:33.075864    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966253075326934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b] <==
	I1212 01:08:29.773258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 01:08:29.784207       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 01:08:29.784354       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 01:08:29.793105       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 01:08:29.793457       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-076578_c0950335-78bd-463b-800d-f691339a8e72!
	I1212 01:08:29.794440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd604350-6e37-45e9-9147-b066bd31081c", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-076578_c0950335-78bd-463b-800d-f691339a8e72 became leader
	I1212 01:08:29.893810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-076578_c0950335-78bd-463b-800d-f691339a8e72!
	

                                                
                                                
-- /stdout --
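The recurring "Could not set up iptables canary" entries in the kubelet log above come from the kubelet probing an IPv6 nat chain that the guest kernel has no table for ("Table does not exist (do you need to insmod?)"). A quick manual check from the host, assuming shell access to the same profile (illustrative only, not part of the harness):

	# list the IPv6 nat table inside the minikube VM; the same 'Table does not exist' error confirms the missing table/module
	out/minikube-linux-amd64 -p default-k8s-diff-port-076578 ssh -- sudo ip6tables -t nat -L -n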
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-076578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-dkmwp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-076578 describe pod metrics-server-6867b74b74-dkmwp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-076578 describe pod metrics-server-6867b74b74-dkmwp: exit status 1 (61.906271ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-dkmwp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-076578 describe pod metrics-server-6867b74b74-dkmwp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.14s)
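Note: the describe call above most likely reports NotFound because it ran without a namespace flag, while the kubelet log shows the pod living in kube-system (kube-system/metrics-server-6867b74b74-dkmwp). A manual re-run of the same post-mortem with the namespace made explicit, assuming the profile and context names from this run (a sketch, not the harness's own code):

	# API server status for the profile
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p default-k8s-diff-port-076578
	# list non-running pods across all namespaces, as the harness does
	kubectl --context default-k8s-diff-port-076578 get po -A --field-selector=status.phase!=Running
	# describe the stuck metrics-server pod in its actual namespace
	kubectl --context default-k8s-diff-port-076578 -n kube-system describe pod metrics-server-6867b74b74-dkmwp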

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1212 01:10:49.698663   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-242725 -n no-preload-242725
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-12 01:18:19.982843041 +0000 UTC m=+6298.074510641
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
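An approximate manual equivalent of the wait that timed out here, assuming readiness of the dashboard pods is the condition being polled (the harness drives this through client-go, so this is only a sketch):

	# wait up to the same 9m for dashboard pods to become Ready in the no-preload profile
	kubectl --context no-preload-242725 -n kubernetes-dashboard wait \
	  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s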
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-242725 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-242725 logs -n 25: (2.117451005s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-000053 -- sudo                         | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-000053                                 | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-459384                           | kubernetes-upgrade-459384    | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:54 UTC |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-242725             | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	| addons  | enable metrics-server -p embed-certs-607268            | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-535684 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | disable-driver-mounts-535684                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:56 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-076578  | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC | 12 Dec 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:59:45
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:59:45.233578  142150 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:59:45.233778  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.233807  142150 out.go:358] Setting ErrFile to fd 2...
	I1212 00:59:45.233824  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.234389  142150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:59:45.235053  142150 out.go:352] Setting JSON to false
	I1212 00:59:45.235948  142150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13327,"bootTime":1733951858,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:59:45.236050  142150 start.go:139] virtualization: kvm guest
	I1212 00:59:45.238284  142150 out.go:177] * [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:59:45.239634  142150 notify.go:220] Checking for updates...
	I1212 00:59:45.239643  142150 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:59:45.240927  142150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:59:45.242159  142150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:59:45.243348  142150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:59:45.244426  142150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:59:45.245620  142150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:59:45.247054  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:59:45.247412  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.247475  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.262410  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1212 00:59:45.262838  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.263420  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.263444  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.263773  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.263944  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.265490  142150 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1212 00:59:45.266656  142150 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:59:45.266925  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.266959  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.281207  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I1212 00:59:45.281596  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.281963  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.281991  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.282333  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.282519  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.316543  142150 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:59:45.317740  142150 start.go:297] selected driver: kvm2
	I1212 00:59:45.317754  142150 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.317960  142150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:59:45.318921  142150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.319030  142150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:59:45.334276  142150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:59:45.334744  142150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:59:45.334784  142150 cni.go:84] Creating CNI manager for ""
	I1212 00:59:45.334845  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:59:45.334901  142150 start.go:340] cluster config:
	{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.335060  142150 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.336873  142150 out.go:177] * Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	I1212 00:59:42.763823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:45.338030  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:59:45.338076  142150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:59:45.338087  142150 cache.go:56] Caching tarball of preloaded images
	I1212 00:59:45.338174  142150 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:59:45.338188  142150 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:59:45.338309  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:59:45.338520  142150 start.go:360] acquireMachinesLock for old-k8s-version-738445: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:59:48.839858  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:51.911930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:57.991816  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:01.063931  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:07.143823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:10.215896  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:16.295837  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:19.367812  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:25.447920  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:28.519965  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:34.599875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:37.671930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:43.751927  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:46.823861  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:52.903942  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:55.975967  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:02.055889  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:05.127830  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:11.207862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:14.279940  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:20.359871  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:23.431885  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:29.511831  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:32.583875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:38.663880  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:41.735869  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:47.815810  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:50.887937  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:56.967886  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:00.039916  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:06.119870  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:09.191917  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:15.271841  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:18.343881  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:24.423844  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:27.495936  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:33.575851  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:36.647862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:39.652816  141469 start.go:364] duration metric: took 4m35.142362604s to acquireMachinesLock for "embed-certs-607268"
	I1212 01:02:39.652891  141469 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:39.652902  141469 fix.go:54] fixHost starting: 
	I1212 01:02:39.653292  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:39.653345  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:39.668953  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1212 01:02:39.669389  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:39.669880  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:02:39.669906  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:39.670267  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:39.670428  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:39.670550  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:02:39.671952  141469 fix.go:112] recreateIfNeeded on embed-certs-607268: state=Stopped err=<nil>
	I1212 01:02:39.671994  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	W1212 01:02:39.672154  141469 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:39.677119  141469 out.go:177] * Restarting existing kvm2 VM for "embed-certs-607268" ...
	I1212 01:02:39.650358  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:39.650413  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650700  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:02:39.650731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650949  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:02:39.652672  141411 machine.go:96] duration metric: took 4m37.426998938s to provisionDockerMachine
	I1212 01:02:39.652723  141411 fix.go:56] duration metric: took 4m37.447585389s for fixHost
	I1212 01:02:39.652731  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 4m37.447868317s
	W1212 01:02:39.652756  141411 start.go:714] error starting host: provision: host is not running
	W1212 01:02:39.652909  141411 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1212 01:02:39.652919  141411 start.go:729] Will try again in 5 seconds ...
	I1212 01:02:39.682230  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Start
	I1212 01:02:39.682424  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring networks are active...
	I1212 01:02:39.683293  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network default is active
	I1212 01:02:39.683713  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network mk-embed-certs-607268 is active
	I1212 01:02:39.684046  141469 main.go:141] libmachine: (embed-certs-607268) Getting domain xml...
	I1212 01:02:39.684631  141469 main.go:141] libmachine: (embed-certs-607268) Creating domain...
	I1212 01:02:40.886712  141469 main.go:141] libmachine: (embed-certs-607268) Waiting to get IP...
	I1212 01:02:40.887814  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:40.888208  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:40.888304  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:40.888203  142772 retry.go:31] will retry after 273.835058ms: waiting for machine to come up
	I1212 01:02:41.164102  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.164574  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.164604  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.164545  142772 retry.go:31] will retry after 260.789248ms: waiting for machine to come up
	I1212 01:02:41.427069  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.427486  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.427513  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.427449  142772 retry.go:31] will retry after 330.511025ms: waiting for machine to come up
	I1212 01:02:41.760038  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.760388  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.760434  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.760337  142772 retry.go:31] will retry after 564.656792ms: waiting for machine to come up
	I1212 01:02:42.327037  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.327545  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.327567  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.327505  142772 retry.go:31] will retry after 473.714754ms: waiting for machine to come up
	I1212 01:02:42.803228  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.803607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.803639  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.803548  142772 retry.go:31] will retry after 872.405168ms: waiting for machine to come up
	I1212 01:02:43.677522  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:43.677891  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:43.677919  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:43.677833  142772 retry.go:31] will retry after 1.092518369s: waiting for machine to come up
	I1212 01:02:44.655390  141411 start.go:360] acquireMachinesLock for no-preload-242725: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:02:44.771319  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:44.771721  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:44.771751  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:44.771666  142772 retry.go:31] will retry after 1.147907674s: waiting for machine to come up
	I1212 01:02:45.921165  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:45.921636  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:45.921666  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:45.921589  142772 retry.go:31] will retry after 1.69009103s: waiting for machine to come up
	I1212 01:02:47.614391  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:47.614838  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:47.614863  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:47.614792  142772 retry.go:31] will retry after 1.65610672s: waiting for machine to come up
	I1212 01:02:49.272864  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:49.273312  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:49.273337  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:49.273268  142772 retry.go:31] will retry after 2.50327667s: waiting for machine to come up
	I1212 01:02:51.779671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:51.780077  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:51.780104  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:51.780016  142772 retry.go:31] will retry after 2.808303717s: waiting for machine to come up
	I1212 01:02:54.591866  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:54.592241  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:54.592285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:54.592208  142772 retry.go:31] will retry after 3.689107313s: waiting for machine to come up
	I1212 01:02:58.282552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.282980  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has current primary IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.283005  141469 main.go:141] libmachine: (embed-certs-607268) Found IP for machine: 192.168.50.151
	I1212 01:02:58.283018  141469 main.go:141] libmachine: (embed-certs-607268) Reserving static IP address...
	I1212 01:02:58.283419  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.283441  141469 main.go:141] libmachine: (embed-certs-607268) Reserved static IP address: 192.168.50.151
	I1212 01:02:58.283453  141469 main.go:141] libmachine: (embed-certs-607268) DBG | skip adding static IP to network mk-embed-certs-607268 - found existing host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"}
	I1212 01:02:58.283462  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Getting to WaitForSSH function...
	I1212 01:02:58.283469  141469 main.go:141] libmachine: (embed-certs-607268) Waiting for SSH to be available...
	I1212 01:02:58.285792  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286126  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.286162  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286298  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH client type: external
	I1212 01:02:58.286330  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa (-rw-------)
	I1212 01:02:58.286378  141469 main.go:141] libmachine: (embed-certs-607268) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:02:58.286394  141469 main.go:141] libmachine: (embed-certs-607268) DBG | About to run SSH command:
	I1212 01:02:58.286403  141469 main.go:141] libmachine: (embed-certs-607268) DBG | exit 0
	I1212 01:02:58.407633  141469 main.go:141] libmachine: (embed-certs-607268) DBG | SSH cmd err, output: <nil>: 
	I1212 01:02:58.407985  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetConfigRaw
	I1212 01:02:58.408745  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.411287  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.411642  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411920  141469 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/config.json ...
	I1212 01:02:58.412117  141469 machine.go:93] provisionDockerMachine start ...
	I1212 01:02:58.412136  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:58.412336  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.414338  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414643  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.414669  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414765  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.414944  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415114  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415259  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.415450  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.415712  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.415724  141469 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:02:58.520032  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:02:58.520068  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520312  141469 buildroot.go:166] provisioning hostname "embed-certs-607268"
	I1212 01:02:58.520341  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520539  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.523169  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.523584  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523733  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.523910  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524092  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524252  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.524411  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.524573  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.524584  141469 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-607268 && echo "embed-certs-607268" | sudo tee /etc/hostname
	I1212 01:02:58.642175  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-607268
	
	I1212 01:02:58.642232  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.645114  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645480  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.645505  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645698  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.645909  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646063  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646192  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.646334  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.646513  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.646530  141469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-607268' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-607268/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-607268' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:02:58.758655  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
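
For reference, the two SSH commands above (set the hostname, then keep /etc/hosts in sync) can be reproduced by hand on the guest with a short shell sketch; the hostname embed-certs-607268 is taken from this run:

    #!/bin/sh
    # Sketch of the hostname provisioning step logged above.
    NAME=embed-certs-607268
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # Keep /etc/hosts in sync so the node resolves its own name.
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi
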
	I1212 01:02:58.758696  141469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:02:58.758715  141469 buildroot.go:174] setting up certificates
	I1212 01:02:58.758726  141469 provision.go:84] configureAuth start
	I1212 01:02:58.758735  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.759031  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.761749  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762024  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.762052  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762165  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.764356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.764699  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764781  141469 provision.go:143] copyHostCerts
	I1212 01:02:58.764874  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:02:58.764898  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:02:58.764986  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:02:58.765107  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:02:58.765118  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:02:58.765160  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:02:58.765235  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:02:58.765245  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:02:58.765296  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:02:58.765369  141469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-607268 san=[127.0.0.1 192.168.50.151 embed-certs-607268 localhost minikube]
	I1212 01:02:58.890412  141469 provision.go:177] copyRemoteCerts
	I1212 01:02:58.890519  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:02:58.890560  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.892973  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893262  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.893291  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893471  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.893647  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.893761  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.893855  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:58.973652  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:02:58.998097  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:02:59.022028  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:02:59.045859  141469 provision.go:87] duration metric: took 287.094036ms to configureAuth
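
The configureAuth step above copies the host CA material and generates a server certificate whose SANs are listed in the log (127.0.0.1, 192.168.50.151, embed-certs-607268, localhost, minikube). minikube does this in Go; the openssl commands below are only an illustrative bash equivalent and assume ca.pem/ca-key.pem are in the current directory:

    # Illustrative equivalent of the server cert generated above (not how minikube does it).
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.embed-certs-607268"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.151,DNS:embed-certs-607268,DNS:localhost,DNS:minikube")
    # ca.pem, server.pem and server-key.pem then go to /etc/docker on the guest, as in the scp calls above.
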
	I1212 01:02:59.045892  141469 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:02:59.046119  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:02:59.046242  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.048869  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049255  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.049285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049465  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.049642  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049764  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049864  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.049974  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.050181  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.050198  141469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:02:59.276670  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:02:59.276708  141469 machine.go:96] duration metric: took 864.577145ms to provisionDockerMachine
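
provisionDockerMachine finishes by writing a one-line sysconfig drop-in for CRI-O and restarting the service (the SSH command a few lines above). A quick check on the guest:

    # Confirm the drop-in written above and that CRI-O restarted cleanly.
    cat /etc/sysconfig/crio.minikube
    # Expected, per the log: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl status crio --no-pager
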
	I1212 01:02:59.276724  141469 start.go:293] postStartSetup for "embed-certs-607268" (driver="kvm2")
	I1212 01:02:59.276747  141469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:02:59.276780  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.277171  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:02:59.277207  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.279974  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280341  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.280387  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280529  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.280738  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.280897  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.281026  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.363091  141469 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:02:59.367476  141469 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:02:59.367503  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:02:59.367618  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:02:59.367749  141469 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:02:59.367844  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:02:59.377895  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:02:59.402410  141469 start.go:296] duration metric: took 125.668908ms for postStartSetup
	I1212 01:02:59.402462  141469 fix.go:56] duration metric: took 19.749561015s for fixHost
	I1212 01:02:59.402485  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.405057  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.405385  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405624  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.405808  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.405974  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.406094  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.406237  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.406423  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.406436  141469 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:02:59.516697  141884 start.go:364] duration metric: took 3m42.810720852s to acquireMachinesLock for "default-k8s-diff-port-076578"
	I1212 01:02:59.516759  141884 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:59.516773  141884 fix.go:54] fixHost starting: 
	I1212 01:02:59.517192  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:59.517241  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:59.533969  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I1212 01:02:59.534367  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:59.534831  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:02:59.534854  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:59.535165  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:59.535362  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:02:59.535499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:02:59.536930  141884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-076578: state=Stopped err=<nil>
	I1212 01:02:59.536951  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	W1212 01:02:59.537093  141884 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:59.538974  141884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-076578" ...
	I1212 01:02:59.516496  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965379.489556963
	
	I1212 01:02:59.516525  141469 fix.go:216] guest clock: 1733965379.489556963
	I1212 01:02:59.516535  141469 fix.go:229] Guest: 2024-12-12 01:02:59.489556963 +0000 UTC Remote: 2024-12-12 01:02:59.40246635 +0000 UTC m=+295.033602018 (delta=87.090613ms)
	I1212 01:02:59.516574  141469 fix.go:200] guest clock delta is within tolerance: 87.090613ms
	I1212 01:02:59.516580  141469 start.go:83] releasing machines lock for "embed-certs-607268", held for 19.863720249s
	I1212 01:02:59.516605  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.516828  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:59.519731  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520075  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.520111  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520202  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520752  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520921  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.521064  141469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:02:59.521131  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.521155  141469 ssh_runner.go:195] Run: cat /version.json
	I1212 01:02:59.521171  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.523724  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.523971  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524036  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524063  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524221  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524374  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524375  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524401  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524553  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.524562  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524719  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524719  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.524837  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.525000  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.628168  141469 ssh_runner.go:195] Run: systemctl --version
	I1212 01:02:59.635800  141469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:02:59.788137  141469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:02:59.795216  141469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:02:59.795289  141469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:02:59.811889  141469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:02:59.811917  141469 start.go:495] detecting cgroup driver to use...
	I1212 01:02:59.811992  141469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:02:59.827154  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:02:59.841138  141469 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:02:59.841193  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:02:59.854874  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:02:59.869250  141469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:02:59.994723  141469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:00.136385  141469 docker.go:233] disabling docker service ...
	I1212 01:03:00.136462  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:00.150966  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:00.163907  141469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:00.340171  141469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:00.480828  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:00.498056  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:00.518273  141469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:00.518339  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.529504  141469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:00.529571  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.540616  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.553419  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.566004  141469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:00.577682  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.589329  141469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.612561  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.625526  141469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:00.635229  141469 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:00.635289  141469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:00.657569  141469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:00.669982  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:00.793307  141469 ssh_runner.go:195] Run: sudo systemctl restart crio
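
The sequence above writes /etc/crictl.yaml, points CRI-O at registry.k8s.io/pause:3.10 and the cgroupfs cgroup manager via sed edits to 02-crio.conf, loads br_netfilter (the sysctl probe failed because the module was not loaded yet), enables IP forwarding, and restarts CRI-O. A spot-check of the result on the guest might look like this:

    # Spot-check the files edited above and the kernel prerequisites.
    cat /etc/crictl.yaml            # runtime-endpoint: unix:///var/run/crio/crio.sock
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    lsmod | grep br_netfilter       # loaded above because /proc/sys/net/bridge/* was missing
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    sudo crictl info | head         # CRI-O should answer on its socket after the restart
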
	I1212 01:03:00.887423  141469 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:00.887498  141469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:00.892715  141469 start.go:563] Will wait 60s for crictl version
	I1212 01:03:00.892773  141469 ssh_runner.go:195] Run: which crictl
	I1212 01:03:00.896646  141469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:00.933507  141469 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:00.933606  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:00.977011  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:01.008491  141469 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
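
The runtime detection above can be repeated by hand over SSH; the expected values for this run come straight from the log:

    # Manual version checks matching the log output above.
    which crictl
    sudo /usr/bin/crictl version    # RuntimeName: cri-o, RuntimeVersion: 1.29.1
    crio --version
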
	I1212 01:02:59.540301  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Start
	I1212 01:02:59.540482  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring networks are active...
	I1212 01:02:59.541181  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network default is active
	I1212 01:02:59.541503  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network mk-default-k8s-diff-port-076578 is active
	I1212 01:02:59.541802  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Getting domain xml...
	I1212 01:02:59.542437  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Creating domain...
	I1212 01:03:00.796803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting to get IP...
	I1212 01:03:00.797932  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798386  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798495  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.798404  142917 retry.go:31] will retry after 199.022306ms: waiting for machine to come up
	I1212 01:03:00.999067  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999547  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999572  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.999499  142917 retry.go:31] will retry after 340.093067ms: waiting for machine to come up
	I1212 01:03:01.340839  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341513  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.341437  142917 retry.go:31] will retry after 469.781704ms: waiting for machine to come up
	I1212 01:03:01.009956  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:03:01.012767  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013224  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:03:01.013252  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013471  141469 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:01.017815  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:01.032520  141469 kubeadm.go:883] updating cluster {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:01.032662  141469 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:01.032715  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:01.070406  141469 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:01.070478  141469 ssh_runner.go:195] Run: which lz4
	I1212 01:03:01.074840  141469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:01.079207  141469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:01.079238  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:02.524822  141469 crio.go:462] duration metric: took 1.450020274s to copy over tarball
	I1212 01:03:02.524909  141469 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:01.812803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813298  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813335  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.813232  142917 retry.go:31] will retry after 552.327376ms: waiting for machine to come up
	I1212 01:03:02.367682  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368152  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368187  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:02.368106  142917 retry.go:31] will retry after 629.731283ms: waiting for machine to come up
	I1212 01:03:02.999887  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000307  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000339  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.000235  142917 retry.go:31] will retry after 764.700679ms: waiting for machine to come up
	I1212 01:03:03.766389  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766891  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766919  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.766845  142917 retry.go:31] will retry after 920.806371ms: waiting for machine to come up
	I1212 01:03:04.689480  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690029  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:04.689996  142917 retry.go:31] will retry after 1.194297967s: waiting for machine to come up
	I1212 01:03:05.886345  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886729  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886796  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:05.886714  142917 retry.go:31] will retry after 1.60985804s: waiting for machine to come up
	I1212 01:03:04.719665  141469 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194717299s)
	I1212 01:03:04.719708  141469 crio.go:469] duration metric: took 2.194851225s to extract the tarball
	I1212 01:03:04.719719  141469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:04.756600  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:04.802801  141469 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:04.802832  141469 cache_images.go:84] Images are preloaded, skipping loading
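
The preload step above copies a ~392 MB lz4 tarball of container images to the guest and unpacks it under /var. A rough manual equivalent (host path, guest IP and ssh user from the log; the file is staged in /tmp here so plain scp works without root):

    # Rough manual equivalent of the preload copy + extract above.
    PRELOAD=/home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
    scp "$PRELOAD" docker@192.168.50.151:/tmp/preloaded.tar.lz4
    ssh docker@192.168.50.151 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && rm -f /tmp/preloaded.tar.lz4'
    ssh docker@192.168.50.151 'sudo crictl images --output json | head'
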
	I1212 01:03:04.802840  141469 kubeadm.go:934] updating node { 192.168.50.151 8443 v1.31.2 crio true true} ...
	I1212 01:03:04.802949  141469 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-607268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
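
The kubelet flags printed above end up in a systemd drop-in and unit file on the guest (written by the scp calls a few lines below). To inspect the merged result:

    # Inspect the kubelet unit and drop-in written from the flags above.
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    cat /lib/systemd/system/kubelet.service
    systemctl cat kubelet --no-pager    # merged view of unit + drop-in
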
	I1212 01:03:04.803008  141469 ssh_runner.go:195] Run: crio config
	I1212 01:03:04.854778  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:04.854804  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:04.854815  141469 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:04.854836  141469 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.151 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-607268 NodeName:embed-certs-607268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:04.854962  141469 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-607268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
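
The generated config above is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp below). One way to sanity-check such a file by hand is a kubeadm dry run; this is purely illustrative, the test does not run it, and it assumes kubeadm sits next to kubelet under /var/lib/minikube/binaries/v1.31.2:

    # Illustrative only: dry-run the generated kubeadm config on the guest.
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
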
	
	I1212 01:03:04.855023  141469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:04.864877  141469 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:04.864967  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:04.874503  141469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1212 01:03:04.891124  141469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:04.907560  141469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1212 01:03:04.924434  141469 ssh_runner.go:195] Run: grep 192.168.50.151	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:04.928518  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
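
The one-liner above rewrites /etc/hosts atomically: filter out any existing control-plane.minikube.internal entry, append a fresh one, then copy the temp file back with sudo. Expanded (bash, values from the log):

    # Expanded form of the /etc/hosts update above.
    IP=192.168.50.151
    NAME=control-plane.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
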
	I1212 01:03:04.940523  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:05.076750  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:05.094388  141469 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268 for IP: 192.168.50.151
	I1212 01:03:05.094424  141469 certs.go:194] generating shared ca certs ...
	I1212 01:03:05.094440  141469 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:05.094618  141469 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:05.094691  141469 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:05.094710  141469 certs.go:256] generating profile certs ...
	I1212 01:03:05.094833  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/client.key
	I1212 01:03:05.094916  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key.9253237b
	I1212 01:03:05.094968  141469 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key
	I1212 01:03:05.095131  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:05.095177  141469 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:05.095192  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:05.095224  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:05.095254  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:05.095293  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:05.095359  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:05.096133  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:05.130605  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:05.164694  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:05.206597  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:05.241305  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 01:03:05.270288  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:05.296137  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:05.320158  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:05.343820  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:05.373277  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:05.397070  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:05.420738  141469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:05.437822  141469 ssh_runner.go:195] Run: openssl version
	I1212 01:03:05.443744  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:05.454523  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459182  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459237  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.465098  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:05.475681  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:05.486396  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490883  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490929  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.496613  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:05.507295  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:05.517980  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522534  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522590  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.528117  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
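
Each certificate install above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink <hash>.0 in /etc/ssl/certs so OpenSSL can find it. For the 936002.pem cert from this run:

    # Expanded CA-install pattern for one certificate (hash 3ec20f2e per the log).
    openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
    sudo ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0
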
	I1212 01:03:05.538979  141469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:05.543723  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:05.549611  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:05.555445  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:05.561482  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:05.567221  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:05.573015  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
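	Each control-plane certificate is then verified with "openssl x509 -checkend 86400", which exits non-zero if the certificate expires within the next 24 hours. The sketch below loops the same check over a few of the paths from the log; the loop structure and reporting are illustrative, only the openssl invocation itself is taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		}
		for _, c := range certs {
			// -checkend 86400 fails (non-zero exit) if the cert expires within 86400 seconds.
			if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
				fmt.Printf("%s expires within 24h (or could not be read): %v\n", c, err)
			} else {
				fmt.Printf("%s valid for at least 24h\n", c)
			}
		}
	}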
	I1212 01:03:05.578902  141469 kubeadm.go:392] StartCluster: {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:05.578984  141469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:05.579042  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.619027  141469 cri.go:89] found id: ""
	I1212 01:03:05.619115  141469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:05.629472  141469 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:05.629501  141469 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:05.629567  141469 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:05.639516  141469 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:05.640491  141469 kubeconfig.go:125] found "embed-certs-607268" server: "https://192.168.50.151:8443"
	I1212 01:03:05.642468  141469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:05.651891  141469 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.151
	I1212 01:03:05.651922  141469 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:05.651934  141469 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:05.651978  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.686414  141469 cri.go:89] found id: ""
	I1212 01:03:05.686501  141469 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:05.702724  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:05.712454  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:05.712480  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:05.712531  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:05.721529  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:05.721603  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:05.730897  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:05.739758  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:05.739815  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:05.749089  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.758042  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:05.758104  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.767425  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:05.776195  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:05.776260  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:05.785435  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:05.794795  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:05.918710  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:06.846975  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.072898  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.139677  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
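	The restart path above regenerates the control plane by running individual "kubeadm init phase" subcommands in sequence rather than a full "kubeadm init". A rough sketch of that sequence follows; runCmd here executes the command locally through bash purely to keep the sketch self-contained, whereas minikube runs the same strings on the node through its ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runCmd executes a shell command string; stands in for minikube's ssh_runner.
	func runCmd(cmd string) error {
		return exec.Command("/bin/bash", "-c", cmd).Run()
	}

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
			if err := runCmd(cmd); err != nil {
				fmt.Printf("phase %q failed: %v\n", p, err)
				return
			}
		}
		fmt.Println("control plane phases regenerated")
	}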
	I1212 01:03:07.237216  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:07.237336  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:07.738145  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.238219  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.255671  141469 api_server.go:72] duration metric: took 1.018455783s to wait for apiserver process to appear ...
	I1212 01:03:08.255705  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:08.255734  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:08.256408  141469 api_server.go:269] stopped: https://192.168.50.151:8443/healthz: Get "https://192.168.50.151:8443/healthz": dial tcp 192.168.50.151:8443: connect: connection refused
	I1212 01:03:08.756070  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:07.498527  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498942  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498966  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:07.498889  142917 retry.go:31] will retry after 2.278929136s: waiting for machine to come up
	I1212 01:03:09.779321  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779847  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779879  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:09.779793  142917 retry.go:31] will retry after 1.82028305s: waiting for machine to come up
	I1212 01:03:10.630080  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.630121  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.630140  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.674408  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.674470  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.756660  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.763043  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:10.763088  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.256254  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.263457  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.263481  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.756759  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.764019  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.764053  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:12.256627  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:12.262369  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:03:12.270119  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:12.270153  141469 api_server.go:131] duration metric: took 4.014438706s to wait for apiserver health ...
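	The apiserver is considered healthy only once /healthz returns 200; until then the 403 and 500 responses above are logged and the probe is retried roughly every half second. A minimal polling sketch of that loop is below; the retry interval, timeout, and the InsecureSkipVerify TLS handling are simplifications for the sketch, not necessarily what minikube configures (the real check would trust the cluster CA).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.151:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}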
	I1212 01:03:12.270164  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:12.270172  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:12.272148  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:12.273667  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:12.289376  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
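	The 496-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration mentioned just above. Its exact contents are not shown in this log; the sketch below writes a generic bridge + host-local conflist of the kind a bridge CNI setup uses, purely as an illustration. The plugin list, subnet, and names are assumptions and may differ from what minikube actually generates.

	package main

	import (
		"fmt"
		"os"
	)

	// A generic bridge/host-local conflist; illustrative only.
	const conflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			fmt.Println("write failed:", err)
		}
	}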
	I1212 01:03:12.312620  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:12.323663  141469 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:12.323715  141469 system_pods.go:61] "coredns-7c65d6cfc9-n66x6" [ae2c1ac7-0c17-453d-a05c-70fbf6d81e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:12.323727  141469 system_pods.go:61] "etcd-embed-certs-607268" [811dc3d0-d893-45a0-a5c7-3fee0efd2e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:12.323747  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [11848f2c-215b-4cf4-88f0-93151c55e7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:12.323764  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [4f4066ab-b6e4-4a46-a03b-dda1776c39ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:12.323776  141469 system_pods.go:61] "kube-proxy-9f6lj" [2463030a-d7db-4031-9e26-0a56a9067520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:12.323784  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [c2aeaf4a-7fb8-4bb8-87ea-5401db017fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:12.323795  141469 system_pods.go:61] "metrics-server-6867b74b74-5bms9" [e1a794f9-cf60-486f-a0e8-670dc7dfb4da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:12.323803  141469 system_pods.go:61] "storage-provisioner" [b29860cd-465d-4e70-ad5d-dd17c22ae290] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:12.323820  141469 system_pods.go:74] duration metric: took 11.170811ms to wait for pod list to return data ...
	I1212 01:03:12.323845  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:12.327828  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:12.327863  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:12.327880  141469 node_conditions.go:105] duration metric: took 4.029256ms to run NodePressure ...
	I1212 01:03:12.327902  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:12.638709  141469 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644309  141469 kubeadm.go:739] kubelet initialised
	I1212 01:03:12.644332  141469 kubeadm.go:740] duration metric: took 5.590168ms waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644356  141469 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:12.650768  141469 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:11.601456  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602012  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602044  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:11.601956  142917 retry.go:31] will retry after 2.272258384s: waiting for machine to come up
	I1212 01:03:13.876607  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.876986  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.877024  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:13.876950  142917 retry.go:31] will retry after 4.014936005s: waiting for machine to come up
	I1212 01:03:19.148724  142150 start.go:364] duration metric: took 3m33.810164292s to acquireMachinesLock for "old-k8s-version-738445"
	I1212 01:03:19.148804  142150 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:19.148816  142150 fix.go:54] fixHost starting: 
	I1212 01:03:19.149247  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:19.149331  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:19.167749  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 01:03:19.168331  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:19.168873  142150 main.go:141] libmachine: Using API Version  1
	I1212 01:03:19.168906  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:19.169286  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:19.169500  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:19.169655  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 01:03:19.171285  142150 fix.go:112] recreateIfNeeded on old-k8s-version-738445: state=Stopped err=<nil>
	I1212 01:03:19.171323  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	W1212 01:03:19.171470  142150 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:19.174413  142150 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-738445" ...
	I1212 01:03:14.657097  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:16.658207  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:17.657933  141469 pod_ready.go:93] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:17.657957  141469 pod_ready.go:82] duration metric: took 5.007165494s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:17.657966  141469 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
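	The pod_ready waits above poll each system pod until its Ready condition reports True. A hedged client-go sketch of that predicate follows; the package paths are the standard k8s.io/client-go ones, while the kubeconfig path and pod name are taken from this log only as example inputs.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, the
	// predicate these waits are polling for.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7c65d6cfc9-n66x6", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", isPodReady(pod))
	}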
	I1212 01:03:19.175763  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .Start
	I1212 01:03:19.175946  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring networks are active...
	I1212 01:03:19.176721  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network default is active
	I1212 01:03:19.177067  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network mk-old-k8s-version-738445 is active
	I1212 01:03:19.177512  142150 main.go:141] libmachine: (old-k8s-version-738445) Getting domain xml...
	I1212 01:03:19.178281  142150 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 01:03:17.896127  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has current primary IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896639  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Found IP for machine: 192.168.39.174
	I1212 01:03:17.896659  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserving static IP address...
	I1212 01:03:17.897028  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.897062  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserved static IP address: 192.168.39.174
	I1212 01:03:17.897087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | skip adding static IP to network mk-default-k8s-diff-port-076578 - found existing host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"}
	I1212 01:03:17.897108  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Getting to WaitForSSH function...
	I1212 01:03:17.897126  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for SSH to be available...
	I1212 01:03:17.899355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899727  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.899754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899911  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH client type: external
	I1212 01:03:17.899941  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa (-rw-------)
	I1212 01:03:17.899976  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:17.899989  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | About to run SSH command:
	I1212 01:03:17.900005  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | exit 0
	I1212 01:03:18.036261  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | SSH cmd err, output: <nil>: 
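	The WaitForSSH step above simply retries "exit 0" through an external ssh client until the guest accepts the connection. The sketch below shows the same idea using golang.org/x/crypto/ssh instead of shelling out; the address, user, and key path are copied from the log, while the retry interval, timeout, and the waitForSSH helper itself are assumptions for illustration.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// waitForSSH retries a trivial "exit 0" session until the host accepts it.
	func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
			Timeout:         10 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					runErr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if runErr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh not available on %s after %s", addr, timeout)
	}

	func main() {
		err := waitForSSH("192.168.39.174:22", "docker",
			"/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa", 2*time.Minute)
		fmt.Println(err)
	}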
	I1212 01:03:18.036610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetConfigRaw
	I1212 01:03:18.037352  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.040173  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040570  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.040595  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040866  141884 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/config.json ...
	I1212 01:03:18.041107  141884 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:18.041134  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.041355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.043609  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.043945  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.043973  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.044142  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.044291  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044466  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.044745  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.044986  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.045002  141884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:18.156161  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:18.156193  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156472  141884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-076578"
	I1212 01:03:18.156499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.159391  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.159871  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.159903  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.160048  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.160244  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160379  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160500  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.160681  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.160898  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.160917  141884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-076578 && echo "default-k8s-diff-port-076578" | sudo tee /etc/hostname
	I1212 01:03:18.285904  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-076578
	
	I1212 01:03:18.285937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.288620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.288987  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.289010  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.289285  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.289491  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289658  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289799  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.289981  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.290190  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.290223  141884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-076578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-076578/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-076578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:18.409683  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:18.409721  141884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:18.409751  141884 buildroot.go:174] setting up certificates
	I1212 01:03:18.409761  141884 provision.go:84] configureAuth start
	I1212 01:03:18.409782  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.410045  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.412393  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412721  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.412756  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.415204  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415502  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.415530  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415663  141884 provision.go:143] copyHostCerts
	I1212 01:03:18.415735  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:18.415757  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:18.415832  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:18.415925  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:18.415933  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:18.415952  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:18.416007  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:18.416015  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:18.416032  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:18.416081  141884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-076578 san=[127.0.0.1 192.168.39.174 default-k8s-diff-port-076578 localhost minikube]
	I1212 01:03:18.502493  141884 provision.go:177] copyRemoteCerts
	I1212 01:03:18.502562  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:18.502594  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.505104  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505377  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.505409  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505568  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.505754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.505892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.506034  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.590425  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:18.616850  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 01:03:18.640168  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:18.664517  141884 provision.go:87] duration metric: took 254.738256ms to configureAuth
	I1212 01:03:18.664542  141884 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:18.664705  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:03:18.664778  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.667425  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.667784  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.667808  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.668004  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.668178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668313  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668448  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.668587  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.668751  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.668772  141884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:18.906880  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:18.906908  141884 machine.go:96] duration metric: took 865.784426ms to provisionDockerMachine
	I1212 01:03:18.906920  141884 start.go:293] postStartSetup for "default-k8s-diff-port-076578" (driver="kvm2")
	I1212 01:03:18.906931  141884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:18.906949  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.907315  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:18.907348  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.909882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910213  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.910242  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910347  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.910542  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.910680  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.910806  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.994819  141884 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:18.998959  141884 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:18.998989  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:18.999069  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:18.999163  141884 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:18.999252  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:19.009226  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:19.032912  141884 start.go:296] duration metric: took 125.973128ms for postStartSetup
	I1212 01:03:19.032960  141884 fix.go:56] duration metric: took 19.516187722s for fixHost
	I1212 01:03:19.032990  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.035623  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.035947  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.035977  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.036151  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.036310  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036438  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036581  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.036738  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:19.036906  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:19.036919  141884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:19.148565  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965399.101726035
	
	I1212 01:03:19.148592  141884 fix.go:216] guest clock: 1733965399.101726035
	I1212 01:03:19.148602  141884 fix.go:229] Guest: 2024-12-12 01:03:19.101726035 +0000 UTC Remote: 2024-12-12 01:03:19.032967067 +0000 UTC m=+242.472137495 (delta=68.758968ms)
	I1212 01:03:19.148628  141884 fix.go:200] guest clock delta is within tolerance: 68.758968ms
	I1212 01:03:19.148635  141884 start.go:83] releasing machines lock for "default-k8s-diff-port-076578", held for 19.631903968s
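	[editor note] The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the host when the delta is small (68.758968ms here). A minimal sketch of that check, using the two timestamps from the log; the tolerance value is an assumption for illustration, not the one fix.go actually uses:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports the absolute host/guest clock delta and whether it
	// falls inside the given tolerance.
	func clockDeltaOK(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Unix(0, 1733965399032967067)  // "Remote" timestamp from the log
		guest := time.Unix(0, 1733965399101726035) // "Guest" timestamp from the log
		delta, ok := clockDeltaOK(host, guest, 2*time.Second) // assumed tolerance
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
	}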
	I1212 01:03:19.148688  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.149016  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:19.151497  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.151926  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.151954  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.152124  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152598  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152762  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152834  141884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:19.152892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.152952  141884 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:19.152972  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.155620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155694  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.155962  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156057  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.156114  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156123  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156316  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156327  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156469  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156583  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156619  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156826  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.156824  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.268001  141884 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:19.275696  141884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:19.426624  141884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:19.432842  141884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:19.432911  141884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:19.449082  141884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:19.449108  141884 start.go:495] detecting cgroup driver to use...
	I1212 01:03:19.449187  141884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:19.466543  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:19.482668  141884 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:19.482733  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:19.497124  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:19.512626  141884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:19.624948  141884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:19.779469  141884 docker.go:233] disabling docker service ...
	I1212 01:03:19.779545  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:19.794888  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:19.810497  141884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:19.954827  141884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:20.086435  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:20.100917  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:20.120623  141884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:20.120683  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.134353  141884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:20.134431  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.150373  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.165933  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.181524  141884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:20.196891  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.209752  141884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.228990  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.241553  141884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:20.251819  141884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:20.251883  141884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:20.267155  141884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:20.277683  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:20.427608  141884 ssh_runner.go:195] Run: sudo systemctl restart crio
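	[editor note] The crio.go steps above edit /etc/crio/crio.conf.d/02-crio.conf via sed over SSH: pin the pause image, switch cgroup_manager to cgroupfs, and recreate conmon_cgroup = "pod", then restart CRI-O. A minimal sketch of the same rewrites applied to an in-memory sample config instead of the real file:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\n" +
			"cgroup_manager = \"systemd\"\n" +
			"conmon_cgroup = \"system.slice\"\n"

		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// drop any existing conmon_cgroup line, then add it back after cgroup_manager
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
			ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")

		fmt.Print(conf)
	}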
	I1212 01:03:20.525699  141884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:20.525804  141884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:20.530984  141884 start.go:563] Will wait 60s for crictl version
	I1212 01:03:20.531055  141884 ssh_runner.go:195] Run: which crictl
	I1212 01:03:20.535013  141884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:20.576177  141884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:20.576251  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.605529  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.638175  141884 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:20.639475  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:20.642566  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643001  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:20.643034  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643196  141884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:20.647715  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
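	[editor note] The /etc/hosts command above drops any stale host.minikube.internal line and appends a fresh one pointing at the gateway. A minimal sketch of the same rewrite, operating on a string rather than the guest's real /etc/hosts:

	package main

	import (
		"fmt"
		"strings"
	)

	// setHostEntry removes existing lines ending in "\t<name>" and appends
	// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
	func setHostEntry(hosts, ip, name string) string {
		var out []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			out = append(out, line)
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n"
		fmt.Print(setHostEntry(hosts, "192.168.39.1", "host.minikube.internal"))
	}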
	I1212 01:03:20.662215  141884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:20.662337  141884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:20.662381  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:20.705014  141884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:20.705112  141884 ssh_runner.go:195] Run: which lz4
	I1212 01:03:20.709477  141884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:20.714111  141884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:20.714145  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
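	[editor note] The preload decision above comes from parsing `sudo crictl images --output json` and checking whether the expected kube-apiserver tag is present; since it is not, the preloaded tarball is copied over. A minimal sketch of that check against a stand-in JSON document (field names follow the CRI Image message, which is an assumption about the exact crictl output shape):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// preloaded reports whether the wanted image tag appears in crictl's JSON.
	func preloaded(crictlJSON []byte, want string) (bool, error) {
		var list imageList
		if err := json.Unmarshal(crictlJSON, &list); err != nil {
			return false, err
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					return true, nil
				}
			}
		}
		return false, nil
	}

	func main() {
		sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
		ok, _ := preloaded(sample, "registry.k8s.io/kube-apiserver:v1.31.2")
		fmt.Println("preloaded:", ok) // false -> fall back to copying /preloaded.tar.lz4
	}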
	I1212 01:03:19.666527  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:21.666676  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:24.165316  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:20.457742  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting to get IP...
	I1212 01:03:20.458818  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.459318  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.459384  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.459280  143077 retry.go:31] will retry after 312.060355ms: waiting for machine to come up
	I1212 01:03:20.772778  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.773842  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.773876  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.773802  143077 retry.go:31] will retry after 381.023448ms: waiting for machine to come up
	I1212 01:03:21.156449  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.156985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.157017  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.156943  143077 retry.go:31] will retry after 395.528873ms: waiting for machine to come up
	I1212 01:03:21.554397  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.554873  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.554905  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.554833  143077 retry.go:31] will retry after 542.808989ms: waiting for machine to come up
	I1212 01:03:22.099791  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.100330  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.100360  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.100301  143077 retry.go:31] will retry after 627.111518ms: waiting for machine to come up
	I1212 01:03:22.728727  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.729219  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.729244  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.729167  143077 retry.go:31] will retry after 649.039654ms: waiting for machine to come up
	I1212 01:03:23.379498  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:23.379935  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:23.379968  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:23.379864  143077 retry.go:31] will retry after 1.057286952s: waiting for machine to come up
	I1212 01:03:24.438408  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:24.438821  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:24.438849  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:24.438774  143077 retry.go:31] will retry after 912.755322ms: waiting for machine to come up
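	[editor note] The old-k8s-version-738445 lines above show retry.go polling for the machine's DHCP lease with a growing, jittered delay ("will retry after ..."). A minimal sketch of that pattern; the growth factor and jitter are assumptions, not minikube's actual retry implementation:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it reports an IP, sleeping a jittered,
	// growing delay between attempts.
	func waitForIP(lookup func() (string, bool), attempts int) (string, error) {
		delay := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("machine did not report an IP after %d attempts", attempts)
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, bool) {
			calls++
			return "192.0.2.10", calls >= 4 // pretend the DHCP lease appears on the 4th poll
		}, 10)
		fmt.Println(ip, err)
	}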
	I1212 01:03:22.285157  141884 crio.go:462] duration metric: took 1.575709911s to copy over tarball
	I1212 01:03:22.285258  141884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:24.495814  141884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210502234s)
	I1212 01:03:24.495848  141884 crio.go:469] duration metric: took 2.210655432s to extract the tarball
	I1212 01:03:24.495857  141884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:24.533396  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:24.581392  141884 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:24.581419  141884 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:24.581428  141884 kubeadm.go:934] updating node { 192.168.39.174 8444 v1.31.2 crio true true} ...
	I1212 01:03:24.581524  141884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-076578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:24.581594  141884 ssh_runner.go:195] Run: crio config
	I1212 01:03:24.625042  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:24.625073  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:24.625083  141884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:24.625111  141884 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-076578 NodeName:default-k8s-diff-port-076578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:24.625238  141884 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-076578"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
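	[editor note] The kubeadm.yaml above is rendered from the kubeadm options struct logged before it. A heavily trimmed sketch of how such a manifest can be produced with text/template (this is not minikube's actual bootstrapper template; field names here are illustrative):

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	kubernetesVersion: {{.KubernetesVersion}}
	`

	type opts struct {
		AdvertiseAddress  string
		APIServerPort     int
		KubernetesVersion string
	}

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, opts{
			AdvertiseAddress:  "192.168.39.174",
			APIServerPort:     8444,
			KubernetesVersion: "v1.31.2",
		})
	}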
	I1212 01:03:24.625313  141884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:24.635818  141884 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:24.635903  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:24.645966  141884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:03:24.665547  141884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:24.682639  141884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1212 01:03:24.700147  141884 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:24.704172  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:24.716697  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:24.842374  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:24.860641  141884 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578 for IP: 192.168.39.174
	I1212 01:03:24.860676  141884 certs.go:194] generating shared ca certs ...
	I1212 01:03:24.860700  141884 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:24.860888  141884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:24.860955  141884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:24.860970  141884 certs.go:256] generating profile certs ...
	I1212 01:03:24.861110  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.key
	I1212 01:03:24.861200  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key.4a68806a
	I1212 01:03:24.861251  141884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key
	I1212 01:03:24.861391  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:24.861444  141884 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:24.861458  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:24.861498  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:24.861535  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:24.861565  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:24.861629  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:24.862588  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:24.899764  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:24.950373  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:24.983222  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:25.017208  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 01:03:25.042653  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:25.071358  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:25.097200  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:25.122209  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:25.150544  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:25.181427  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:25.210857  141884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:25.229580  141884 ssh_runner.go:195] Run: openssl version
	I1212 01:03:25.236346  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:25.247510  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252355  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252407  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.258511  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:25.272698  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:25.289098  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295737  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295806  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.304133  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:25.315805  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:25.328327  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333482  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333539  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.339367  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:25.351612  141884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:25.357060  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:25.363452  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:25.369984  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:25.376434  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:25.382895  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:25.389199  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
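	[editor note] The `openssl x509 -noout -in ... -checkend 86400` calls above ask whether each control-plane certificate expires within the next 24 hours. A minimal equivalent in Go using crypto/x509; the path is a placeholder, since minikube checks the certs under /var/lib/minikube/certs on the guest, not locally:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires inside
	// the given window.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}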
	I1212 01:03:25.395232  141884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:25.395325  141884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:25.395370  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.439669  141884 cri.go:89] found id: ""
	I1212 01:03:25.439749  141884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:25.453870  141884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:25.453893  141884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:25.453951  141884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:25.464552  141884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:25.465609  141884 kubeconfig.go:125] found "default-k8s-diff-port-076578" server: "https://192.168.39.174:8444"
	I1212 01:03:25.467767  141884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:25.477907  141884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I1212 01:03:25.477943  141884 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:25.477958  141884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:25.478018  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.521891  141884 cri.go:89] found id: ""
	I1212 01:03:25.521978  141884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:25.539029  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:25.549261  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:25.549283  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:25.549341  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:03:25.558948  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:25.559022  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:25.568947  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:03:25.579509  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:25.579614  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:25.589573  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.600434  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:25.600498  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.610337  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:03:25.619956  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:25.620014  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:25.631231  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:25.641366  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:25.761159  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:26.165525  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:28.168457  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.168492  141469 pod_ready.go:82] duration metric: took 10.510517291s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.168506  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175334  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.175361  141469 pod_ready.go:82] duration metric: took 6.84531ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175375  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183060  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.183093  141469 pod_ready.go:82] duration metric: took 7.709158ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183106  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.190999  141469 pod_ready.go:93] pod "kube-proxy-9f6lj" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.191028  141469 pod_ready.go:82] duration metric: took 7.913069ms for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.191040  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199945  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.199972  141469 pod_ready.go:82] duration metric: took 8.923682ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199984  141469 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
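	[editor note] The pod_ready waits above treat a pod as "Ready" when its status carries a condition of type Ready with status True. A minimal sketch of that predicate over a pared-down stand-in for the corev1 pod conditions (not client-go):

	package main

	import "fmt"

	type condition struct {
		Type   string
		Status string
	}

	// podReady mirrors the check behind the "Ready":"True"/"False" log lines.
	func podReady(conds []condition) bool {
		for _, c := range conds {
			if c.Type == "Ready" {
				return c.Status == "True"
			}
		}
		return false
	}

	func main() {
		waiting := []condition{{Type: "Ready", Status: "False"}}
		ready := []condition{{Type: "Ready", Status: "True"}}
		fmt.Println(podReady(waiting), podReady(ready)) // false true
	}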
	I1212 01:03:25.352682  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:25.353126  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:25.353154  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:25.353073  143077 retry.go:31] will retry after 1.136505266s: waiting for machine to come up
	I1212 01:03:26.491444  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:26.491927  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:26.491955  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:26.491868  143077 retry.go:31] will retry after 1.467959561s: waiting for machine to come up
	I1212 01:03:27.961709  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:27.962220  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:27.962255  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:27.962169  143077 retry.go:31] will retry after 2.70831008s: waiting for machine to come up
	I1212 01:03:26.830271  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069070962s)
	I1212 01:03:26.830326  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.035935  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.113317  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.210226  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:27.210329  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:27.710504  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.211114  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.242967  141884 api_server.go:72] duration metric: took 1.032736901s to wait for apiserver process to appear ...
	I1212 01:03:28.243012  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:28.243038  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:28.243643  141884 api_server.go:269] stopped: https://192.168.39.174:8444/healthz: Get "https://192.168.39.174:8444/healthz": dial tcp 192.168.39.174:8444: connect: connection refused
	I1212 01:03:28.743921  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.546075  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.546113  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.546129  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.621583  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.621619  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.743860  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.750006  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:31.750052  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
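	[editor note] The api_server.go loop above keeps GETting https://192.168.39.174:8444/healthz and treats 403 (anonymous user rejected) and 500 (post-start hooks still failing) as "not ready yet". A minimal sketch of such a polling loop; the interval is an assumption, and the insecure TLS config stands in for probing a cluster whose CA is not loaded, purely for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", code)
			} else {
				fmt.Printf("healthz unreachable (%v), retrying\n", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.174:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}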
	I1212 01:03:32.243382  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.269990  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.270033  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.743516  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.752979  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.753012  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:33.243571  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:33.247902  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:03:33.253786  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:33.253810  141884 api_server.go:131] duration metric: took 5.010790107s to wait for apiserver health ...
	I1212 01:03:33.253820  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:33.253826  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:33.255762  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
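
For context, the repeated 500 responses above are the apiserver's /healthz endpoint reporting that two post-start hooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) have not finished; once every hook reports ok, /healthz answers 200 and the api_server.go wait completes. Below is a minimal Go sketch of that kind of polling loop, assuming a self-signed apiserver certificate and an illustrative 5-minute budget; it is not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the budget runs out.
	func waitForHealthz(url string, budget time.Duration) error {
		client := &http.Client{
			// The kubeadm-generated serving cert is not in the host trust store,
			// so certificate verification is skipped for this probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz answered "ok"
				}
				// A 500 listing "[-]poststarthook/... failed" means some hooks
				// have not finished yet; keep polling.
				fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, budget)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.174:8444/healthz", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

Only readiness of the endpoint matters here; the individual failed hooks are expected to clear on their own as the restarted apiserver finishes bootstrapping, which is exactly what the 200 at 01:03:33 above shows.
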
	I1212 01:03:30.208396  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:32.708024  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:30.671930  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:30.672414  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:30.672442  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:30.672366  143077 retry.go:31] will retry after 2.799706675s: waiting for machine to come up
	I1212 01:03:33.474261  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:33.474816  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:33.474851  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:33.474758  143077 retry.go:31] will retry after 4.339389188s: waiting for machine to come up
	I1212 01:03:33.257007  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:33.267934  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:33.286197  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:33.297934  141884 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:33.297982  141884 system_pods.go:61] "coredns-7c65d6cfc9-xn886" [db1f42f1-93d9-4942-813d-e3de1cc24801] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:33.297995  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [25555578-8169-4986-aa10-06a442152c50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:33.298006  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [1004c64c-91ca-43c3-9c3d-43dab13d3812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:33.298023  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [63d42313-4ea9-44f9-a8eb-b0c6c73424c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:33.298039  141884 system_pods.go:61] "kube-proxy-7frgh" [191ed421-4297-47c7-a46d-407a8eaa0378] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:33.298049  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [1506a505-697c-4b80-b7ef-55de1116fa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:33.298060  141884 system_pods.go:61] "metrics-server-6867b74b74-k9s7n" [806badc0-b609-421f-9203-3fd91212a145] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:33.298077  141884 system_pods.go:61] "storage-provisioner" [bc133673-b7e2-42b2-98ac-e3284c9162ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:33.298090  141884 system_pods.go:74] duration metric: took 11.875762ms to wait for pod list to return data ...
	I1212 01:03:33.298105  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:33.302482  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:33.302517  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:33.302532  141884 node_conditions.go:105] duration metric: took 4.418219ms to run NodePressure ...
	I1212 01:03:33.302555  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:33.728028  141884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735780  141884 kubeadm.go:739] kubelet initialised
	I1212 01:03:33.735810  141884 kubeadm.go:740] duration metric: took 7.738781ms waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735824  141884 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:33.743413  141884 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:35.750012  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.348909  141411 start.go:364] duration metric: took 54.693436928s to acquireMachinesLock for "no-preload-242725"
	I1212 01:03:39.348976  141411 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:39.348990  141411 fix.go:54] fixHost starting: 
	I1212 01:03:39.349442  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:39.349485  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:39.367203  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I1212 01:03:39.367584  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:39.368158  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:03:39.368185  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:39.368540  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:39.368717  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:39.368854  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:03:39.370433  141411 fix.go:112] recreateIfNeeded on no-preload-242725: state=Stopped err=<nil>
	I1212 01:03:39.370460  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	W1212 01:03:39.370594  141411 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:39.372621  141411 out.go:177] * Restarting existing kvm2 VM for "no-preload-242725" ...
	I1212 01:03:35.206417  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.208384  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.818233  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818777  142150 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 01:03:37.818808  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818818  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 01:03:37.819321  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.819376  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | skip adding static IP to network mk-old-k8s-version-738445 - found existing host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"}
	I1212 01:03:37.819390  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
	I1212 01:03:37.819412  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 01:03:37.819428  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 01:03:37.821654  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822057  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.822084  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822234  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 01:03:37.822265  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 01:03:37.822311  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:37.822325  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 01:03:37.822346  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 01:03:37.951989  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:37.952380  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 01:03:37.953037  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:37.955447  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.955770  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.955801  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.956073  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 01:03:37.956261  142150 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:37.956281  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:37.956490  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:37.958938  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959225  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.959262  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959406  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:37.959569  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959749  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959912  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:37.960101  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:37.960348  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:37.960364  142150 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:38.076202  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:38.076231  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076484  142150 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 01:03:38.076506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076678  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.079316  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079689  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.079717  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.080047  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080178  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080313  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.080481  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.080693  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.080708  142150 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 01:03:38.212896  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 01:03:38.212934  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.215879  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216314  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.216353  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216568  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.216792  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.216980  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.217138  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.217321  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.217556  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.217574  142150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:38.341064  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:38.341103  142150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:38.341148  142150 buildroot.go:174] setting up certificates
	I1212 01:03:38.341167  142150 provision.go:84] configureAuth start
	I1212 01:03:38.341182  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.341471  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:38.343939  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344355  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.344385  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.346597  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.346910  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.346960  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.347103  142150 provision.go:143] copyHostCerts
	I1212 01:03:38.347168  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:38.347188  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:38.347247  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:38.347363  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:38.347373  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:38.347397  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:38.347450  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:38.347457  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:38.347476  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:38.347523  142150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
	I1212 01:03:38.675149  142150 provision.go:177] copyRemoteCerts
	I1212 01:03:38.675217  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:38.675251  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.678239  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678639  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.678677  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.679049  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.679174  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.679294  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:38.770527  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:38.797696  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:38.822454  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 01:03:38.847111  142150 provision.go:87] duration metric: took 505.925391ms to configureAuth
	I1212 01:03:38.847145  142150 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:38.847366  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 01:03:38.847459  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.850243  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850594  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.850621  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850779  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.850981  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851153  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851340  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.851581  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.851786  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.851803  142150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:39.093404  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:39.093440  142150 machine.go:96] duration metric: took 1.137164233s to provisionDockerMachine
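
All of the provisioning steps above (hostname, /etc/hosts, the CRI-O sysconfig drop-in) are executed over SSH against the VM as the "docker" user with the machine's private key. A condensed Go sketch of that kind of remote command runner follows, using golang.org/x/crypto/ssh and the address and key path shown in the log; it is illustrative only, not minikube's ssh_runner.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote opens an SSH session and runs one command, roughly what the
	// ssh_runner "Run:" lines in the log correspond to.
	func runRemote(addr, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// The VM's host key is freshly generated, so it is not checked here.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.72.25:22",
			"/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa",
			"hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}
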
	I1212 01:03:39.093457  142150 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 01:03:39.093474  142150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:39.093516  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.093848  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:39.093891  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.096719  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097117  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.097151  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097305  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.097497  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.097650  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.097773  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.186726  142150 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:39.191223  142150 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:39.191249  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:39.191337  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:39.191438  142150 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:39.191557  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:39.201460  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:39.229101  142150 start.go:296] duration metric: took 135.624628ms for postStartSetup
	I1212 01:03:39.229146  142150 fix.go:56] duration metric: took 20.080331642s for fixHost
	I1212 01:03:39.229168  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.231985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232443  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.232479  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232702  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.232913  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233076  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233213  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.233368  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:39.233632  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:39.233649  142150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:39.348721  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965419.319505647
	
	I1212 01:03:39.348749  142150 fix.go:216] guest clock: 1733965419.319505647
	I1212 01:03:39.348761  142150 fix.go:229] Guest: 2024-12-12 01:03:39.319505647 +0000 UTC Remote: 2024-12-12 01:03:39.229149912 +0000 UTC m=+234.032647876 (delta=90.355735ms)
	I1212 01:03:39.348787  142150 fix.go:200] guest clock delta is within tolerance: 90.355735ms
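
The guest-clock comparison above simply measures the VM's `date +%s.%N` output against a host-side timestamp; the roughly 90 ms delta is accepted because it falls inside the skew tolerance. A tiny Go illustration using the exact values from the log (the 2-second tolerance here is an assumption, not necessarily minikube's threshold):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Both values are taken from the fix.go lines above.
		guest := time.Unix(1733965419, 319505647)                        // `date +%s.%N` reported by the VM
		remote := time.Date(2024, 12, 12, 1, 3, 39, 229149912, time.UTC) // host-side reference time

		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}

		// Assumed tolerance for illustration; the real threshold may differ.
		const tolerance = 2 * time.Second
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
		}
	}
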
	I1212 01:03:39.348796  142150 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 20.20001796s
	I1212 01:03:39.348829  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.349099  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:39.352088  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352481  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.352510  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352667  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353244  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353428  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353528  142150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:39.353575  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.353645  142150 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:39.353674  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.356260  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356614  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.356644  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356675  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356908  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357112  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.357172  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.357293  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357375  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357438  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.357514  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357652  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357765  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.441961  142150 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:39.478428  142150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:39.631428  142150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:39.637870  142150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:39.637958  142150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:39.655923  142150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:39.655951  142150 start.go:495] detecting cgroup driver to use...
	I1212 01:03:39.656042  142150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:39.676895  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:39.692966  142150 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:39.693048  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:39.710244  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:39.725830  142150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:39.848998  142150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:40.014388  142150 docker.go:233] disabling docker service ...
	I1212 01:03:40.014458  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:40.035579  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:40.052188  142150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:40.184958  142150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:40.332719  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:40.349338  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:40.371164  142150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:03:40.371232  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.382363  142150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:40.382437  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.393175  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.404397  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.417867  142150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:40.432988  142150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:40.447070  142150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:40.447145  142150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:40.460260  142150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:40.472139  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:40.616029  142150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:40.724787  142150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:40.724874  142150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:40.732096  142150 start.go:563] Will wait 60s for crictl version
	I1212 01:03:40.732168  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:40.737266  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:40.790677  142150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:40.790765  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.825617  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.857257  142150 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
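
The CRI-O preparation above comes down to rewriting two keys in /etc/crio/crio.conf.d/02-crio.conf (the pause image and the cgroup manager) with sed over SSH and then restarting the service. A rough Go equivalent of those two in-place edits, run locally for illustration only (paths and values are taken from the log; this is not how minikube actually applies them):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// rewriteKey replaces a whole `key = ...` line in a CRI-O drop-in, which is
	// what the `sudo sed -i 's|^.*key = .*$|...|'` commands in the log do.
	func rewriteKey(conf []byte, key, value string) []byte {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
	}

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		conf, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
		conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile(path, conf, 0o644); err != nil {
			log.Fatal(err)
		}
		// A `systemctl restart crio`, as in the log, is still needed afterwards.
	}
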
	I1212 01:03:37.750453  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.752224  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.374093  141411 main.go:141] libmachine: (no-preload-242725) Calling .Start
	I1212 01:03:39.374303  141411 main.go:141] libmachine: (no-preload-242725) Ensuring networks are active...
	I1212 01:03:39.375021  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network default is active
	I1212 01:03:39.375456  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network mk-no-preload-242725 is active
	I1212 01:03:39.375951  141411 main.go:141] libmachine: (no-preload-242725) Getting domain xml...
	I1212 01:03:39.376726  141411 main.go:141] libmachine: (no-preload-242725) Creating domain...
	I1212 01:03:40.703754  141411 main.go:141] libmachine: (no-preload-242725) Waiting to get IP...
	I1212 01:03:40.705274  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.705752  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.705821  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.705709  143226 retry.go:31] will retry after 196.576482ms: waiting for machine to come up
	I1212 01:03:40.904341  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.904718  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.904740  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.904669  143226 retry.go:31] will retry after 375.936901ms: waiting for machine to come up
	I1212 01:03:41.282278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.282839  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.282871  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.282793  143226 retry.go:31] will retry after 427.731576ms: waiting for machine to come up
	I1212 01:03:41.712553  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.713198  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.713231  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.713084  143226 retry.go:31] will retry after 421.07445ms: waiting for machine to come up
	I1212 01:03:39.707174  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:41.711103  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.207685  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:40.858851  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:40.861713  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:40.862166  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862355  142150 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:40.866911  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:40.879513  142150 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:40.879655  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 01:03:40.879718  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:40.927436  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:40.927517  142150 ssh_runner.go:195] Run: which lz4
	I1212 01:03:40.932446  142150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:40.937432  142150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:40.937461  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 01:03:42.695407  142150 crio.go:462] duration metric: took 1.763008004s to copy over tarball
	I1212 01:03:42.695494  142150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:41.768335  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.252708  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.754333  141884 pod_ready.go:93] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.754362  141884 pod_ready.go:82] duration metric: took 11.010925207s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.754371  141884 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760121  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.760142  141884 pod_ready.go:82] duration metric: took 5.764171ms for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760151  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765554  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.765575  141884 pod_ready.go:82] duration metric: took 5.417017ms for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765589  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:42.135878  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.136341  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.136367  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.136284  143226 retry.go:31] will retry after 477.81881ms: waiting for machine to come up
	I1212 01:03:42.616400  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.616906  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.616929  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.616858  143226 retry.go:31] will retry after 597.608319ms: waiting for machine to come up
	I1212 01:03:43.215837  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:43.216430  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:43.216454  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:43.216363  143226 retry.go:31] will retry after 1.118837214s: waiting for machine to come up
	I1212 01:03:44.336666  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:44.337229  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:44.337253  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:44.337187  143226 retry.go:31] will retry after 1.008232952s: waiting for machine to come up
	I1212 01:03:45.346868  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:45.347386  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:45.347423  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:45.347307  143226 retry.go:31] will retry after 1.735263207s: waiting for machine to come up
	I1212 01:03:47.084570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:47.084980  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:47.085012  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:47.084931  143226 retry.go:31] will retry after 1.662677797s: waiting for machine to come up
	I1212 01:03:46.208324  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.707694  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:45.698009  142150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002470206s)
	I1212 01:03:45.698041  142150 crio.go:469] duration metric: took 3.002598421s to extract the tarball
	I1212 01:03:45.698057  142150 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:45.746245  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:45.783730  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:45.783758  142150 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:03:45.783842  142150 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.783850  142150 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.783909  142150 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.783919  142150 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:45.783965  142150 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.783988  142150 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.783989  142150 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.783935  142150 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.785706  142150 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.785722  142150 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785696  142150 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.785757  142150 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.010563  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.011085  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 01:03:46.072381  142150 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 01:03:46.072424  142150 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.072478  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.113400  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.113431  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.114036  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.114169  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.120739  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.124579  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.124728  142150 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 01:03:46.124754  142150 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 01:03:46.124784  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287160  142150 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 01:03:46.287214  142150 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.287266  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287272  142150 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 01:03:46.287303  142150 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.287353  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294327  142150 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 01:03:46.294369  142150 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.294417  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294420  142150 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 01:03:46.294451  142150 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.294488  142150 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 01:03:46.294501  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294519  142150 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.294547  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.294561  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294640  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.296734  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.297900  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.310329  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.400377  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.400443  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.400478  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.400489  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.426481  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.434403  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.434471  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.568795  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 01:03:46.568915  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.568956  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.569017  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.584299  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.584337  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.608442  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.716715  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.716749  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 01:03:46.727723  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.730180  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 01:03:46.730347  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 01:03:46.744080  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 01:03:46.770152  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 01:03:46.802332  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 01:03:48.053863  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:48.197060  142150 cache_images.go:92] duration metric: took 2.413284252s to LoadCachedImages
	W1212 01:03:48.197176  142150 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1212 01:03:48.197197  142150 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 01:03:48.197352  142150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:48.197443  142150 ssh_runner.go:195] Run: crio config
	I1212 01:03:48.246700  142150 cni.go:84] Creating CNI manager for ""
	I1212 01:03:48.246731  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:48.246743  142150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:48.246771  142150 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 01:03:48.246952  142150 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:48.247031  142150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 01:03:48.257337  142150 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:48.257412  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:48.267272  142150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 01:03:48.284319  142150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:48.301365  142150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1212 01:03:48.321703  142150 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:48.326805  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:48.343523  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:48.476596  142150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:48.497742  142150 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 01:03:48.497830  142150 certs.go:194] generating shared ca certs ...
	I1212 01:03:48.497859  142150 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:48.498094  142150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:48.498160  142150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:48.498177  142150 certs.go:256] generating profile certs ...
	I1212 01:03:48.498311  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 01:03:48.498388  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 01:03:48.498445  142150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 01:03:48.498603  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:48.498651  142150 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:48.498665  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:48.498700  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:48.498732  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:48.498761  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:48.498816  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:48.499418  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:48.546900  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:48.587413  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:48.617873  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:48.645334  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 01:03:48.673348  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:03:48.707990  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:48.748273  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:03:48.785187  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:48.818595  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:48.843735  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:48.871353  142150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:48.893168  142150 ssh_runner.go:195] Run: openssl version
	I1212 01:03:48.902034  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:48.916733  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921766  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921849  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.928169  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:48.939794  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:48.951260  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957920  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957987  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.965772  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:48.977889  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:48.989362  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995796  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995866  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:49.002440  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:49.014144  142150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:49.020570  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:49.027464  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:49.033770  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:49.040087  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:49.046103  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:49.052288  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:03:49.058638  142150 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:49.058762  142150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:49.058820  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.101711  142150 cri.go:89] found id: ""
	I1212 01:03:49.101800  142150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:49.113377  142150 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:49.113398  142150 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:49.113439  142150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:49.124296  142150 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:49.125851  142150 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:03:49.126876  142150 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-86355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-738445" cluster setting kubeconfig missing "old-k8s-version-738445" context setting]
	I1212 01:03:49.127925  142150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:49.129837  142150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:49.143200  142150 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.25
	I1212 01:03:49.143244  142150 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:49.143262  142150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:49.143339  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.190150  142150 cri.go:89] found id: ""
	I1212 01:03:49.190240  142150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:49.208500  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:49.219194  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:49.219221  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:49.219299  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:49.231345  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:49.231442  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:49.244931  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:49.254646  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:49.254721  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:49.264535  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.273770  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:49.273875  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.284129  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:49.293154  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:49.293221  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:49.302654  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:49.312579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:49.458825  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:48.069316  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.069362  141884 pod_ready.go:82] duration metric: took 3.303763458s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.069380  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328758  141884 pod_ready.go:93] pod "kube-proxy-7frgh" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.328784  141884 pod_ready.go:82] duration metric: took 259.396178ms for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328798  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337082  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.337106  141884 pod_ready.go:82] duration metric: took 8.298777ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337119  141884 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:50.343458  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.748914  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:48.749510  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:48.749535  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:48.749475  143226 retry.go:31] will retry after 2.670904101s: waiting for machine to come up
	I1212 01:03:51.421499  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:51.421915  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:51.421961  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:51.421862  143226 retry.go:31] will retry after 3.566697123s: waiting for machine to come up
	I1212 01:03:50.708435  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:53.207675  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:50.328104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.599973  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.749920  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.834972  142150 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:50.835093  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.335779  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.835728  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.335936  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.335817  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.836146  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.335264  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.835917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.344098  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.344166  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:56.345835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.990515  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:54.990916  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:54.990941  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:54.990869  143226 retry.go:31] will retry after 4.288131363s: waiting for machine to come up
	I1212 01:03:55.706167  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:57.707796  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:55.335677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:55.835164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.335826  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.835888  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.335539  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.835520  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.335630  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.835457  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.835939  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.843944  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.844210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:59.284312  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.284807  141411 main.go:141] libmachine: (no-preload-242725) Found IP for machine: 192.168.61.222
	I1212 01:03:59.284834  141411 main.go:141] libmachine: (no-preload-242725) Reserving static IP address...
	I1212 01:03:59.284851  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has current primary IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.285300  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.285334  141411 main.go:141] libmachine: (no-preload-242725) DBG | skip adding static IP to network mk-no-preload-242725 - found existing host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"}
	I1212 01:03:59.285357  141411 main.go:141] libmachine: (no-preload-242725) Reserved static IP address: 192.168.61.222
	I1212 01:03:59.285376  141411 main.go:141] libmachine: (no-preload-242725) Waiting for SSH to be available...
	I1212 01:03:59.285390  141411 main.go:141] libmachine: (no-preload-242725) DBG | Getting to WaitForSSH function...
	I1212 01:03:59.287532  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287840  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.287869  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287970  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH client type: external
	I1212 01:03:59.287998  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa (-rw-------)
	I1212 01:03:59.288043  141411 main.go:141] libmachine: (no-preload-242725) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:59.288066  141411 main.go:141] libmachine: (no-preload-242725) DBG | About to run SSH command:
	I1212 01:03:59.288092  141411 main.go:141] libmachine: (no-preload-242725) DBG | exit 0
	I1212 01:03:59.415723  141411 main.go:141] libmachine: (no-preload-242725) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:59.416104  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetConfigRaw
	I1212 01:03:59.416755  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.419446  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.419848  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.419879  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.420182  141411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json ...
	I1212 01:03:59.420388  141411 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:59.420412  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:59.420637  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.422922  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423257  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.423278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423432  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.423626  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423787  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423918  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.424051  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.424222  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.424231  141411 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:59.536768  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:59.536796  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537016  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:03:59.537042  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537234  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.539806  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540110  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.540141  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540337  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.540509  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540665  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540800  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.540973  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.541155  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.541171  141411 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-242725 && echo "no-preload-242725" | sudo tee /etc/hostname
	I1212 01:03:59.668244  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-242725
	
	I1212 01:03:59.668269  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.671021  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671353  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.671374  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671630  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.671851  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672000  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672160  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.672310  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.672485  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.672502  141411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-242725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-242725/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-242725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:59.792950  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
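The hostname fix-up above is idempotent: if no /etc/hosts line already ends in the machine name, a 127.0.1.1 entry is rewritten or appended. A minimal Go sketch of that same logic (illustrative only, not minikube's implementation; the hostname is taken from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the shell logic above: keep the machine name
// resolvable via a 127.0.1.1 entry without duplicating an existing one.
func ensureHostname(hosts, name string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts)
	if hasName {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(ensureHostname(string(data), "no-preload-242725"))
}
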
	I1212 01:03:59.792985  141411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:59.793011  141411 buildroot.go:174] setting up certificates
	I1212 01:03:59.793024  141411 provision.go:84] configureAuth start
	I1212 01:03:59.793041  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.793366  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.796185  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796599  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.796638  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796783  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.799165  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799532  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.799558  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799711  141411 provision.go:143] copyHostCerts
	I1212 01:03:59.799780  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:59.799804  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:59.799869  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:59.800004  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:59.800015  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:59.800051  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:59.800144  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:59.800155  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:59.800182  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:59.800263  141411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.no-preload-242725 san=[127.0.0.1 192.168.61.222 localhost minikube no-preload-242725]
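The server certificate above is signed by the minikube CA (ca.pem/ca-key.pem) and carries the SAN list shown in the log line. The sketch below is illustrative only and self-signs to stay short; the SANs and organization come from the log, while validity and key size are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: the real flow signs with the minikube CA key;
	// this sketch self-signs a server certificate with the same SANs.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-242725"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-242725"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.222")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
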
	I1212 01:03:59.987182  141411 provision.go:177] copyRemoteCerts
	I1212 01:03:59.987249  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:59.987290  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.989902  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990285  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.990317  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990520  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.990712  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.990856  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.990981  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.078289  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:04:00.103149  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:04:00.131107  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:04:00.159076  141411 provision.go:87] duration metric: took 366.034024ms to configureAuth
	I1212 01:04:00.159103  141411 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:04:00.159305  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:04:00.159401  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.162140  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162537  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.162570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162696  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.162864  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163016  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163124  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.163262  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.163436  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.163451  141411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:04:00.407729  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:04:00.407758  141411 machine.go:96] duration metric: took 987.35601ms to provisionDockerMachine
	I1212 01:04:00.407773  141411 start.go:293] postStartSetup for "no-preload-242725" (driver="kvm2")
	I1212 01:04:00.407787  141411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:04:00.407810  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.408186  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:04:00.408218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.410950  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411329  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.411360  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411585  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.411809  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.411981  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.412115  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.498221  141411 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:04:00.502621  141411 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:04:00.502644  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:04:00.502705  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:04:00.502779  141411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:04:00.502863  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:04:00.512322  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:00.540201  141411 start.go:296] duration metric: took 132.410555ms for postStartSetup
	I1212 01:04:00.540250  141411 fix.go:56] duration metric: took 21.191260423s for fixHost
	I1212 01:04:00.540287  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.542631  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.542983  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.543011  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.543212  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.543393  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543556  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543702  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.543867  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.544081  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.544095  141411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:04:00.656532  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965440.609922961
	
	I1212 01:04:00.656560  141411 fix.go:216] guest clock: 1733965440.609922961
	I1212 01:04:00.656569  141411 fix.go:229] Guest: 2024-12-12 01:04:00.609922961 +0000 UTC Remote: 2024-12-12 01:04:00.540255801 +0000 UTC m=+358.475944555 (delta=69.66716ms)
	I1212 01:04:00.656597  141411 fix.go:200] guest clock delta is within tolerance: 69.66716ms
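The guest clock check parses the `date +%s.%N` output returned over SSH and compares it with the host clock. A minimal sketch of that comparison; the 1-second tolerance is an assumption for illustration (the log only states the delta is within tolerance):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1733965440.609922961",
// with the full 9-digit nanosecond field) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733965440.609922961") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 1 * time.Second // hypothetical threshold for this sketch
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}
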
	I1212 01:04:00.656616  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 21.307670093s
	I1212 01:04:00.656644  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.656898  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:00.659345  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659694  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.659722  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659878  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660405  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660584  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660663  141411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:04:00.660731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.660751  141411 ssh_runner.go:195] Run: cat /version.json
	I1212 01:04:00.660771  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.663331  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663458  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663717  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663757  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663789  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663802  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663867  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664039  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664044  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664201  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664202  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664359  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664359  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.664490  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.777379  141411 ssh_runner.go:195] Run: systemctl --version
	I1212 01:04:00.783765  141411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:04:00.933842  141411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:04:00.941376  141411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:04:00.941441  141411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:04:00.958993  141411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
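The two runs above probe for a loopback CNI config and then move any bridge/podman CNI configs aside by appending ".mk_disabled". A small Go sketch of that rename step (directory and suffix from the log; error handling simplified):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableBridgeCNIs renames bridge/podman CNI configs in /etc/cni/net.d by
// appending ".mk_disabled", mirroring the find/mv invocation in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("disabled %d bridge cni config(s): %v\n", len(disabled), disabled)
}
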
	I1212 01:04:00.959021  141411 start.go:495] detecting cgroup driver to use...
	I1212 01:04:00.959084  141411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:04:00.977166  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:04:00.991166  141411 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:04:00.991231  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:04:01.004993  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:04:01.018654  141411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:04:01.136762  141411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:04:01.300915  141411 docker.go:233] disabling docker service ...
	I1212 01:04:01.301036  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:04:01.316124  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:04:01.329544  141411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:04:01.451034  141411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:04:01.583471  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:04:01.611914  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:04:01.632628  141411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:04:01.632706  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.644315  141411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:04:01.644384  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.656980  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.668295  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.679885  141411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:04:01.692032  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.703893  141411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.724486  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
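The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. An in-memory Go sketch of the same edits (illustrative only; the sample config content is made up):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same edits as the sed commands above.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	sample := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Print(rewriteCrioConf(sample))
}
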
	I1212 01:04:01.737251  141411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:04:01.748955  141411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:04:01.749025  141411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:04:01.763688  141411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
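When the bridge netfilter sysctl is missing, the fallback is to load br_netfilter and then enable IPv4 forwarding, as the two runs above show. A minimal sketch of that fallback (must run as root; not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is not present yet, load the
	// br_netfilter module (needs root).
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and IPv4 forwarding configured")
}
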
	I1212 01:04:01.773871  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:01.903690  141411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:04:02.006921  141411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:04:02.007013  141411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:04:02.013116  141411 start.go:563] Will wait 60s for crictl version
	I1212 01:04:02.013187  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.017116  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:04:02.061210  141411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
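The crictl version output above is plain key/value text; a small sketch of parsing it so a caller can gate on RuntimeName/RuntimeVersion (sample output copied from the log):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits the plain-text `crictl version` output into
// key/value pairs.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		parts := strings.SplitN(sc.Text(), ":", 2)
		if len(parts) == 2 {
			fields[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Printf("runtime %s %s (API %s)\n", v["RuntimeName"], v["RuntimeVersion"], v["RuntimeApiVersion"])
}
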
	I1212 01:04:02.061304  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.093941  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.124110  141411 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:59.708028  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:01.709056  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:04.207527  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.335673  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:00.835254  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.336063  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.835209  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.335874  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.835468  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.335332  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.835312  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.335965  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.835626  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.845618  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.346194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:02.125647  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:02.128481  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.128914  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:02.128973  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.129205  141411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 01:04:02.133801  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:02.148892  141411 kubeadm.go:883] updating cluster {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:04:02.149001  141411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:04:02.149033  141411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:04:02.187762  141411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:04:02.187805  141411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:04:02.187934  141411 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.187988  141411 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.188025  141411 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.188070  141411 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.188118  141411 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.188220  141411 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.188332  141411 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1212 01:04:02.188501  141411 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.189594  141411 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.189674  141411 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.189892  141411 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.190015  141411 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1212 01:04:02.190121  141411 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.190152  141411 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.190169  141411 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.190746  141411 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
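The daemon lookups above fail on the build host, so each image is next inspected inside the VM with podman and marked "needs transfer" when it is missing or its ID differs from the cached one. A sketch of that decision (expected ID copied from the kube-apiserver "needs transfer" line below; sudo and podman are assumed to be on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether an image must be copied into the VM: it does
// if `podman image inspect` fails (image missing) or reports an ID other
// than the one the cache expects.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image missing from the container runtime
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/kube-apiserver:v1.31.2"
	want := "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173"
	fmt.Printf("%s needs transfer: %v\n", img, needsTransfer(img, want))
}
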
	I1212 01:04:02.372557  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.375185  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.389611  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.394581  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.396799  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.408346  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1212 01:04:02.413152  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.438165  141411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1212 01:04:02.438217  141411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.438272  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.518752  141411 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1212 01:04:02.518804  141411 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.518856  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.556287  141411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1212 01:04:02.556329  141411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.556371  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569629  141411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1212 01:04:02.569671  141411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.569683  141411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1212 01:04:02.569721  141411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.569731  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569770  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667454  141411 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1212 01:04:02.667511  141411 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.667510  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.667532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.667549  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667632  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.667644  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.667671  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.683807  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.784024  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.797709  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.797836  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.797848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.797969  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.822411  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.880580  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.927305  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.928532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.928661  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.938172  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.973083  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:03.023699  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1212 01:04:03.023813  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.069822  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1212 01:04:03.069879  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1212 01:04:03.069920  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1212 01:04:03.069945  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:03.069973  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:03.069990  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:03.070037  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1212 01:04:03.070116  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:03.094188  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1212 01:04:03.094210  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094229  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1212 01:04:03.094249  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094285  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1212 01:04:03.094313  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1212 01:04:03.094379  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1212 01:04:03.094399  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1212 01:04:03.094480  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:04.469173  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.174822  141411 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.080313699s)
	I1212 01:04:05.174869  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1212 01:04:05.174899  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.08062641s)
	I1212 01:04:05.174928  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1212 01:04:05.174968  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.174994  141411 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 01:04:05.175034  141411 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.175086  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:05.175038  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.179340  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:06.207626  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:08.706815  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.335479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:05.835485  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.335252  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.835837  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.335166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.835880  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.336166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.335533  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.835771  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.843908  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:07.654693  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.479543185s)
	I1212 01:04:07.654721  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1212 01:04:07.654743  141411 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.654775  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.475408038s)
	I1212 01:04:07.654848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:07.654784  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.699286  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:09.647620  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.948278157s)
	I1212 01:04:09.647642  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.992718083s)
	I1212 01:04:09.647662  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1212 01:04:09.647683  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 01:04:09.647686  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647734  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647776  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:09.652886  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 01:04:11.112349  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.464585062s)
	I1212 01:04:11.112384  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1212 01:04:11.112412  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.112462  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.206933  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.208623  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.335255  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:10.835915  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.335375  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.835283  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.335618  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.835897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.335425  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.835757  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.335839  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.836078  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.844442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:14.845189  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.083753  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.971262547s)
	I1212 01:04:13.083788  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1212 01:04:13.083821  141411 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:13.083878  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:17.087777  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.003870257s)
	I1212 01:04:17.087818  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1212 01:04:17.087853  141411 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:17.087917  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:15.707981  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:18.207205  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:15.336090  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:15.835274  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.335372  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.835280  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.335431  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.835268  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.335492  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.835414  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.335266  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.835632  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.345467  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:19.845255  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:17.734979  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 01:04:17.735041  141411 cache_images.go:123] Successfully loaded all cached images
	I1212 01:04:17.735049  141411 cache_images.go:92] duration metric: took 15.547226992s to LoadCachedImages
	I1212 01:04:17.735066  141411 kubeadm.go:934] updating node { 192.168.61.222 8443 v1.31.2 crio true true} ...
	I1212 01:04:17.735209  141411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-242725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
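The kubelet drop-in above is rendered from the node's name, IP, and Kubernetes version. A simplified templating sketch (the template text is a stand-in, not minikube's real template):

package main

import (
	"os"
	"text/template"
)

// Render a kubelet ExecStart line like the drop-in above from node values.
const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	data := struct{ KubernetesVersion, NodeName, NodeIP string }{
		KubernetesVersion: "v1.31.2",
		NodeName:          "no-preload-242725",
		NodeIP:            "192.168.61.222",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
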
	I1212 01:04:17.735311  141411 ssh_runner.go:195] Run: crio config
	I1212 01:04:17.780826  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:17.780850  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:17.780859  141411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:04:17.780882  141411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.222 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-242725 NodeName:no-preload-242725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:04:17.781025  141411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-242725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.222"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.222"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:04:17.781091  141411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:04:17.792290  141411 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:04:17.792374  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:04:17.802686  141411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1212 01:04:17.819496  141411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:04:17.836164  141411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
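
The rendered kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the harness ships to /var/tmp/minikube/kubeadm.yaml.new and later feeds to the kubeadm init phases via --config. A minimal stdlib-only sketch of inspecting such a file, not minikube's own code, could look like this (the path is taken from the log; everything else is illustrative):

// multidoc.go: list the kind of each document in a multi-doc kubeadm config.
// A hedged sketch, assuming only the Go standard library; the path comes from the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	// kubeadm expects one YAML stream with the documents separated by "---".
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
			}
		}
	}
}
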
	I1212 01:04:17.855844  141411 ssh_runner.go:195] Run: grep 192.168.61.222	control-plane.minikube.internal$ /etc/hosts
	I1212 01:04:17.860034  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:17.874418  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:18.011357  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:04:18.028641  141411 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725 for IP: 192.168.61.222
	I1212 01:04:18.028666  141411 certs.go:194] generating shared ca certs ...
	I1212 01:04:18.028683  141411 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:04:18.028880  141411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:04:18.028940  141411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:04:18.028954  141411 certs.go:256] generating profile certs ...
	I1212 01:04:18.029088  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.key
	I1212 01:04:18.029164  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key.f2ca822e
	I1212 01:04:18.029235  141411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key
	I1212 01:04:18.029404  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:04:18.029438  141411 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:04:18.029449  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:04:18.029485  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:04:18.029517  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:04:18.029555  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:04:18.029621  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:18.030313  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:04:18.082776  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:04:18.116012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:04:18.147385  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:04:18.180861  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:04:18.225067  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:04:18.255999  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:04:18.280193  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:04:18.304830  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:04:18.329012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:04:18.355462  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:04:18.379991  141411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:04:18.397637  141411 ssh_runner.go:195] Run: openssl version
	I1212 01:04:18.403727  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:04:18.415261  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419809  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419885  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.425687  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:04:18.438938  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:04:18.452150  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457050  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457116  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.463151  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:04:18.476193  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:04:18.489034  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493916  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493969  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.500285  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:04:18.513016  141411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:04:18.517996  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:04:18.524465  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:04:18.530607  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:04:18.536857  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:04:18.542734  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:04:18.548786  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
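
Each of the openssl runs above uses -checkend 86400 to confirm that a control-plane certificate will still be valid for at least the next 24 hours. The same check can be expressed in Go; the sketch below is only an illustration (the certificate path is one of those listed in the log), not the harness's implementation:

// certcheck.go: a minimal sketch of the 24h-expiry check the log performs with
// "openssl x509 -checkend 86400", using crypto/x509 instead of shelling out.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
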
	I1212 01:04:18.554771  141411 kubeadm.go:392] StartCluster: {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:04:18.554897  141411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:04:18.554950  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.593038  141411 cri.go:89] found id: ""
	I1212 01:04:18.593131  141411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:04:18.604527  141411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:04:18.604550  141411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:04:18.604605  141411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:04:18.614764  141411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:04:18.616082  141411 kubeconfig.go:125] found "no-preload-242725" server: "https://192.168.61.222:8443"
	I1212 01:04:18.618611  141411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:04:18.628709  141411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.222
	I1212 01:04:18.628741  141411 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:04:18.628753  141411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:04:18.628814  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.673970  141411 cri.go:89] found id: ""
	I1212 01:04:18.674067  141411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:04:18.692603  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:04:18.704916  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:04:18.704940  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:04:18.704999  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:04:18.714952  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:04:18.715015  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:04:18.724982  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:04:18.734756  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:04:18.734817  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:04:18.744528  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.753898  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:04:18.753955  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.763929  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:04:18.773108  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:04:18.773153  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
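
The grep/rm sequence above keeps a kubeconfig under /etc/kubernetes only if it references https://control-plane.minikube.internal:8443; here every file is missing, so each grep exits with status 2 and the rm is effectively a no-op before kubeadm regenerates the files. A rough stdlib sketch of the same idea (not the harness's actual code):

// staleconf.go: a hedged sketch of the stale-kubeconfig check above: keep a
// config only if it points at the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove so kubeadm can regenerate it.
			_ = os.Remove(f)
			fmt.Println("removed (or absent):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}
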
	I1212 01:04:18.782710  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:04:18.792750  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:18.902446  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.056638  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.154145942s)
	I1212 01:04:20.056677  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.275475  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.348697  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.483317  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:04:20.483487  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.983704  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.484485  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.526353  141411 api_server.go:72] duration metric: took 1.043031812s to wait for apiserver process to appear ...
	I1212 01:04:21.526389  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:04:21.526415  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:20.207458  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:22.212936  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:20.335276  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.835232  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.335776  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.835983  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.335369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.836160  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.335257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.835348  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.336170  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.835521  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.362548  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.362574  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.362586  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.380904  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.380939  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.527174  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.533112  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:24.533146  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.026678  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.031368  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.031409  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.526576  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.532260  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.532297  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:26.026741  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:26.031841  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:04:26.038198  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:04:26.038228  141411 api_server.go:131] duration metric: took 4.511829936s to wait for apiserver health ...
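
The healthz polling above shows the usual restart sequence: 403 while anonymous /healthz access is still forbidden, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still completing, and finally 200 "ok". A bare-bones polling sketch along the same lines (TLS verification is skipped here purely for brevity; a faithful client would trust the cluster CA instead):

// healthz.go: a minimal sketch of polling the apiserver /healthz endpoint,
// retrying on any non-200 response until a deadline. Address from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.222:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
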
	I1212 01:04:26.038240  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:26.038249  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:26.040150  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:04:22.343994  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:24.344818  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.346428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.041669  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:04:26.055010  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
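
With the apiserver healthy, the bridge CNI is configured by writing a small conflist to /etc/cni/net.d/1-k8s.conflist. The log does not reproduce the 496-byte file itself, so the sketch below writes an illustrative bridge + host-local conflist for the 10.244.0.0/16 pod subnet; the JSON field values are assumptions for illustration, not minikube's literal output:

// cniconf.go: writes an illustrative bridge CNI conflist, similar in spirit to
// the 1-k8s.conflist the log mentions. The JSON body is an assumption, not the
// exact file minikube generates.
package main

import "os"

const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
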
	I1212 01:04:26.076860  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:04:26.092122  141411 system_pods.go:59] 8 kube-system pods found
	I1212 01:04:26.092154  141411 system_pods.go:61] "coredns-7c65d6cfc9-7w9dc" [878bfb78-fae5-4e05-b0ae-362841eace85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:04:26.092163  141411 system_pods.go:61] "etcd-no-preload-242725" [ed97c029-7933-4f4e-ab6c-f514b963ce21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:04:26.092170  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [df66d12b-b847-4ef3-b610-5679ff50e8c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:04:26.092175  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [eb5bc914-4267-41e8-9b37-26b7d3da9f68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:04:26.092180  141411 system_pods.go:61] "kube-proxy-rjwps" [fccefb3e-a282-4f0e-9070-11cc95bca868] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:04:26.092185  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [139de4ad-468c-4f1b-becf-3708bcaa7c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:04:26.092190  141411 system_pods.go:61] "metrics-server-6867b74b74-xzkbn" [16e0364c-18f9-43c2-9394-bc8548ce9caa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:04:26.092194  141411 system_pods.go:61] "storage-provisioner" [06c3232e-011a-4aff-b3ca-81858355bef4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:04:26.092200  141411 system_pods.go:74] duration metric: took 15.315757ms to wait for pod list to return data ...
	I1212 01:04:26.092208  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:04:26.095691  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:04:26.095715  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:04:26.095725  141411 node_conditions.go:105] duration metric: took 3.513466ms to run NodePressure ...
	I1212 01:04:26.095742  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:26.389652  141411 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398484  141411 kubeadm.go:739] kubelet initialised
	I1212 01:04:26.398513  141411 kubeadm.go:740] duration metric: took 8.824036ms waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398524  141411 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:04:26.406667  141411 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.416093  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416137  141411 pod_ready.go:82] duration metric: took 9.418311ms for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.416151  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416165  141411 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.422922  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422951  141411 pod_ready.go:82] duration metric: took 6.774244ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.422962  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422971  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.429822  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429854  141411 pod_ready.go:82] duration metric: took 6.874602ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.429866  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429875  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.483542  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483578  141411 pod_ready.go:82] duration metric: took 53.690915ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.483609  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483622  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
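
The pod_ready.go lines above poll each system-critical pod until its PodReady condition is True, skipping pods whose node is not yet Ready. A compact client-go sketch of such a wait (kubeconfig path and pod name are placeholders taken from the log; this is not the harness's implementation):

// podready.go: a hedged sketch of waiting for a pod's Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-242725", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
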
	I1212 01:04:24.707572  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:27.207073  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:25.335742  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:25.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.335824  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.836097  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.335807  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.835612  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.335615  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.835140  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.335695  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.843868  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.844684  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:28.081872  141411 pod_ready.go:93] pod "kube-proxy-rjwps" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:28.081901  141411 pod_ready.go:82] duration metric: took 1.598267411s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:28.081921  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:30.088965  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:32.099574  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:29.706557  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:31.706767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:33.706983  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.335304  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:30.835767  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.335536  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.836051  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.336149  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.835257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.335529  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.835959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.336054  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.835955  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.344074  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.345401  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:34.588690  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:34.588715  141411 pod_ready.go:82] duration metric: took 6.50678624s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:34.588727  141411 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:36.596475  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:36.207357  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:38.207516  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.335472  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:35.835166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.335337  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.336098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.835686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.335195  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.835464  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.336101  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.836164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.844602  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.845115  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.095215  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:41.594487  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.708001  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:42.708477  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.336111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:40.835714  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.335249  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.836111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.335205  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.836175  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.335577  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.835336  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.335947  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.835740  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.344150  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.844336  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:43.595231  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:46.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.708857  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:47.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.207408  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:45.335845  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:45.835169  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.335842  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.835872  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.335682  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.835761  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.336087  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.836134  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.844848  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.344941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:48.595492  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.095830  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.706544  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:50.335959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:50.835873  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:50.835996  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:50.878308  142150 cri.go:89] found id: ""
	I1212 01:04:50.878347  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.878360  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:50.878377  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:50.878444  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:50.914645  142150 cri.go:89] found id: ""
	I1212 01:04:50.914673  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.914681  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:50.914687  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:50.914736  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:50.954258  142150 cri.go:89] found id: ""
	I1212 01:04:50.954286  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.954307  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:50.954314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:50.954376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:50.993317  142150 cri.go:89] found id: ""
	I1212 01:04:50.993353  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.993361  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:50.993367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:50.993430  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:51.028521  142150 cri.go:89] found id: ""
	I1212 01:04:51.028551  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.028565  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:51.028572  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:51.028653  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:51.064752  142150 cri.go:89] found id: ""
	I1212 01:04:51.064779  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.064791  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:51.064799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:51.064861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:51.099780  142150 cri.go:89] found id: ""
	I1212 01:04:51.099809  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.099820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:51.099828  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:51.099910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:51.140668  142150 cri.go:89] found id: ""
	I1212 01:04:51.140696  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.140704  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:51.140713  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:51.140747  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.181092  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:51.181123  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:51.239873  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:51.239914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:51.256356  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:51.256383  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:51.391545  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:51.391573  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:51.391602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
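
Process 142150 finds no control-plane containers at this point: every crictl query returns an empty ID list, and the harness falls back to collecting kubelet, dmesg, and CRI-O logs. A small sketch of the same container-discovery loop, shelling out to crictl as the harness does over SSH (assumes crictl is on PATH and runnable via sudo; not the harness's actual code):

// crilist.go: a hedged sketch of listing control-plane containers by name,
// mirroring the "crictl ps -a --quiet --name=<component>" calls in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
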
	I1212 01:04:53.965098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:53.981900  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:53.981994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:54.033922  142150 cri.go:89] found id: ""
	I1212 01:04:54.033955  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.033967  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:54.033975  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:54.034038  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:54.084594  142150 cri.go:89] found id: ""
	I1212 01:04:54.084623  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.084634  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:54.084641  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:54.084704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:54.132671  142150 cri.go:89] found id: ""
	I1212 01:04:54.132700  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.132708  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:54.132714  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:54.132768  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:54.169981  142150 cri.go:89] found id: ""
	I1212 01:04:54.170011  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.170019  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:54.170025  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:54.170078  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:54.207708  142150 cri.go:89] found id: ""
	I1212 01:04:54.207737  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.207747  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:54.207753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:54.207812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:54.248150  142150 cri.go:89] found id: ""
	I1212 01:04:54.248176  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.248184  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:54.248191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:54.248240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:54.287792  142150 cri.go:89] found id: ""
	I1212 01:04:54.287820  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.287829  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:54.287835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:54.287892  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:54.322288  142150 cri.go:89] found id: ""
	I1212 01:04:54.322319  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.322330  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:54.322347  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:54.322364  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:54.378947  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:54.378989  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:54.394801  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:54.394845  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:54.473896  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:54.473916  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:54.473929  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:54.558076  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:54.558135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.843857  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:54.345207  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.095934  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.598377  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.706720  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.707883  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.102923  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:57.117418  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:57.117478  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:57.157977  142150 cri.go:89] found id: ""
	I1212 01:04:57.158003  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.158012  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:57.158017  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:57.158074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:57.196388  142150 cri.go:89] found id: ""
	I1212 01:04:57.196417  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.196427  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:57.196432  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:57.196484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:57.238004  142150 cri.go:89] found id: ""
	I1212 01:04:57.238040  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.238048  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:57.238055  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:57.238124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:57.276619  142150 cri.go:89] found id: ""
	I1212 01:04:57.276665  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.276676  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:57.276684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:57.276750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:57.313697  142150 cri.go:89] found id: ""
	I1212 01:04:57.313733  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.313745  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:57.313753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:57.313823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:57.351569  142150 cri.go:89] found id: ""
	I1212 01:04:57.351616  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.351629  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:57.351637  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:57.351705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:57.386726  142150 cri.go:89] found id: ""
	I1212 01:04:57.386758  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.386766  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:57.386772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:57.386821  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:57.421496  142150 cri.go:89] found id: ""
	I1212 01:04:57.421524  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.421533  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:57.421543  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:57.421555  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:57.475374  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:57.475425  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:57.490771  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:57.490813  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:57.562485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:57.562513  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:57.562530  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:57.645022  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:57.645070  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.193526  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:00.209464  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:00.209539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:56.843562  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.843654  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:01.343428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.095640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.596162  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.207281  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:02.706000  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.248388  142150 cri.go:89] found id: ""
	I1212 01:05:00.248417  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.248426  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:00.248431  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:00.248480  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:00.284598  142150 cri.go:89] found id: ""
	I1212 01:05:00.284632  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.284642  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:00.284648  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:00.284710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:00.321068  142150 cri.go:89] found id: ""
	I1212 01:05:00.321107  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.321119  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:00.321127  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:00.321189  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:00.358622  142150 cri.go:89] found id: ""
	I1212 01:05:00.358651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.358660  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:00.358666  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:00.358720  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:00.398345  142150 cri.go:89] found id: ""
	I1212 01:05:00.398373  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.398383  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:00.398390  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:00.398442  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:00.437178  142150 cri.go:89] found id: ""
	I1212 01:05:00.437215  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.437227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:00.437235  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:00.437307  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:00.472621  142150 cri.go:89] found id: ""
	I1212 01:05:00.472651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.472662  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:00.472668  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:00.472735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:00.510240  142150 cri.go:89] found id: ""
	I1212 01:05:00.510268  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.510278  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:00.510288  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:00.510301  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:00.596798  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:00.596819  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:00.596830  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:00.673465  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:00.673506  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.716448  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:00.716485  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:00.770265  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:00.770303  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.285159  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:03.299981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:03.300043  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:03.335198  142150 cri.go:89] found id: ""
	I1212 01:05:03.335227  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.335239  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:03.335248  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:03.335319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:03.372624  142150 cri.go:89] found id: ""
	I1212 01:05:03.372651  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.372659  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:03.372665  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:03.372712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:03.408235  142150 cri.go:89] found id: ""
	I1212 01:05:03.408267  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.408279  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:03.408286  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:03.408350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:03.448035  142150 cri.go:89] found id: ""
	I1212 01:05:03.448068  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.448083  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:03.448091  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:03.448144  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:03.488563  142150 cri.go:89] found id: ""
	I1212 01:05:03.488593  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.488602  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:03.488607  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:03.488658  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:03.527858  142150 cri.go:89] found id: ""
	I1212 01:05:03.527886  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.527905  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:03.527913  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:03.527969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:03.564004  142150 cri.go:89] found id: ""
	I1212 01:05:03.564034  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.564044  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:03.564052  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:03.564113  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:03.610648  142150 cri.go:89] found id: ""
	I1212 01:05:03.610679  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.610691  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:03.610702  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:03.610716  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:03.666958  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:03.666996  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.680927  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:03.680961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:03.762843  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:03.762876  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:03.762894  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:03.838434  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:03.838472  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:03.344025  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.844236  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:03.095197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.096865  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:04.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.208202  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:06.377590  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:06.391770  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:06.391861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:06.430050  142150 cri.go:89] found id: ""
	I1212 01:05:06.430083  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.430096  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:06.430103  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:06.430168  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:06.467980  142150 cri.go:89] found id: ""
	I1212 01:05:06.468014  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.468026  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:06.468033  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:06.468090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:06.505111  142150 cri.go:89] found id: ""
	I1212 01:05:06.505144  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.505156  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:06.505165  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:06.505235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:06.542049  142150 cri.go:89] found id: ""
	I1212 01:05:06.542091  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.542104  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:06.542112  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:06.542175  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:06.576957  142150 cri.go:89] found id: ""
	I1212 01:05:06.576982  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.576991  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:06.576997  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:06.577050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:06.613930  142150 cri.go:89] found id: ""
	I1212 01:05:06.613963  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.613974  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:06.613980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:06.614045  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:06.654407  142150 cri.go:89] found id: ""
	I1212 01:05:06.654441  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.654450  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:06.654455  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:06.654503  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:06.691074  142150 cri.go:89] found id: ""
	I1212 01:05:06.691103  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.691112  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:06.691122  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:06.691133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:06.748638  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:06.748674  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:06.762741  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:06.762772  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:06.833840  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:06.833867  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:06.833885  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:06.914595  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:06.914649  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.461666  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:09.478815  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:09.478889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:09.515975  142150 cri.go:89] found id: ""
	I1212 01:05:09.516007  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.516019  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:09.516042  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:09.516120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:09.556933  142150 cri.go:89] found id: ""
	I1212 01:05:09.556965  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.556977  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:09.556985  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:09.557050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:09.593479  142150 cri.go:89] found id: ""
	I1212 01:05:09.593509  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.593520  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:09.593528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:09.593595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:09.633463  142150 cri.go:89] found id: ""
	I1212 01:05:09.633501  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.633513  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:09.633522  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:09.633583  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:09.666762  142150 cri.go:89] found id: ""
	I1212 01:05:09.666789  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.666798  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:09.666804  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:09.666871  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:09.704172  142150 cri.go:89] found id: ""
	I1212 01:05:09.704206  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.704217  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:09.704228  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:09.704288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:09.749679  142150 cri.go:89] found id: ""
	I1212 01:05:09.749708  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.749717  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:09.749724  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:09.749791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:09.789339  142150 cri.go:89] found id: ""
	I1212 01:05:09.789370  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.789379  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:09.789388  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:09.789399  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:09.875218  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:09.875259  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.918042  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:09.918074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:09.971010  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:09.971052  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:09.985524  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:09.985553  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:10.059280  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
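Every "describe nodes" attempt above fails the same way because nothing is serving on localhost:8443, i.e. the kube-apiserver container never came up on this node. A minimal sketch of confirming that directly on the node, reusing the binary and kubeconfig paths that appear in the log (ss being available on the node is an assumption):

	# check whether anything is listening on the apiserver port
	sudo ss -ltnp | grep ':8443' || echo 'nothing listening on 8443'
	# the same kubectl the collector uses, run against the on-node kubeconfig
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get nodes --kubeconfig=/var/lib/minikube/kubeconfig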
	I1212 01:05:08.343968  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:10.844912  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.595940  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.596206  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.094527  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.707469  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.206124  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.206285  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.560353  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:12.573641  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:12.573719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:12.611903  142150 cri.go:89] found id: ""
	I1212 01:05:12.611931  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.611940  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:12.611947  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:12.612019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:12.647038  142150 cri.go:89] found id: ""
	I1212 01:05:12.647078  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.647090  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:12.647099  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:12.647188  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:12.684078  142150 cri.go:89] found id: ""
	I1212 01:05:12.684111  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.684123  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:12.684132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:12.684194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:12.720094  142150 cri.go:89] found id: ""
	I1212 01:05:12.720125  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.720137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:12.720145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:12.720208  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:12.762457  142150 cri.go:89] found id: ""
	I1212 01:05:12.762492  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.762504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:12.762512  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:12.762564  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:12.798100  142150 cri.go:89] found id: ""
	I1212 01:05:12.798131  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.798139  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:12.798145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:12.798195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:12.832455  142150 cri.go:89] found id: ""
	I1212 01:05:12.832486  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.832494  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:12.832501  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:12.832558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:12.866206  142150 cri.go:89] found id: ""
	I1212 01:05:12.866239  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.866249  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:12.866258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:12.866273  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:12.918512  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:12.918550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:12.932506  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:12.932535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:13.011647  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:13.011670  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:13.011689  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:13.090522  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:13.090565  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:13.343045  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.343706  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.096430  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.097196  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.207697  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
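The interleaved pod_ready.go lines come from the other StartStop clusters (processes 141884, 141411, 141469) polling their metrics-server pods, which never reach Ready. A minimal sketch of digging into one of them, using a pod name taken from the log; which kubeconfig/context to point kubectl at is an assumption:

	# show the pod's status and the tail of its events to see why it is not Ready
	kubectl -n kube-system get pod metrics-server-6867b74b74-5bms9 -o wide
	kubectl -n kube-system describe pod metrics-server-6867b74b74-5bms9 | tail -n 20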
	I1212 01:05:15.634171  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:15.648003  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:15.648067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:15.684747  142150 cri.go:89] found id: ""
	I1212 01:05:15.684780  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.684788  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:15.684795  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:15.684856  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:15.723209  142150 cri.go:89] found id: ""
	I1212 01:05:15.723236  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.723245  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:15.723252  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:15.723299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:15.761473  142150 cri.go:89] found id: ""
	I1212 01:05:15.761504  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.761513  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:15.761519  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:15.761588  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:15.795637  142150 cri.go:89] found id: ""
	I1212 01:05:15.795668  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.795677  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:15.795685  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:15.795735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:15.835576  142150 cri.go:89] found id: ""
	I1212 01:05:15.835616  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.835628  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:15.835636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:15.835690  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:15.877331  142150 cri.go:89] found id: ""
	I1212 01:05:15.877359  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.877370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:15.877379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:15.877440  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:15.914225  142150 cri.go:89] found id: ""
	I1212 01:05:15.914255  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.914265  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:15.914271  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:15.914323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:15.949819  142150 cri.go:89] found id: ""
	I1212 01:05:15.949845  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.949853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:15.949862  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:15.949877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:16.029950  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:16.029991  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:16.071065  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:16.071094  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:16.126731  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:16.126786  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:16.140774  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:16.140807  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:16.210269  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
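Between passes the collector waits for an apiserver process via "pgrep -xnf kube-apiserver.*minikube.*", which keeps failing, so the same probe repeats roughly every three seconds. A minimal sketch of that wait loop; the retry count and sleep interval here are assumptions, only the pgrep command is taken from the log:

	# wait up to ~60s for a kube-apiserver process to appear on the node
	for i in $(seq 1 20); do
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	    sleep 3
	done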
	I1212 01:05:18.710498  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:18.725380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:18.725462  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:18.762409  142150 cri.go:89] found id: ""
	I1212 01:05:18.762438  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.762446  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:18.762453  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:18.762501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:18.800308  142150 cri.go:89] found id: ""
	I1212 01:05:18.800336  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.800344  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:18.800351  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:18.800419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:18.834918  142150 cri.go:89] found id: ""
	I1212 01:05:18.834947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.834955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:18.834962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:18.835012  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:18.872434  142150 cri.go:89] found id: ""
	I1212 01:05:18.872470  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.872481  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:18.872490  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:18.872551  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:18.906919  142150 cri.go:89] found id: ""
	I1212 01:05:18.906947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.906955  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:18.906962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:18.907011  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:18.944626  142150 cri.go:89] found id: ""
	I1212 01:05:18.944661  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.944671  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:18.944677  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:18.944728  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:18.981196  142150 cri.go:89] found id: ""
	I1212 01:05:18.981224  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.981233  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:18.981239  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:18.981290  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:19.017640  142150 cri.go:89] found id: ""
	I1212 01:05:19.017669  142150 logs.go:282] 0 containers: []
	W1212 01:05:19.017679  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:19.017691  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:19.017728  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:19.089551  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:19.089582  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:19.089602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:19.176914  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:19.176958  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:19.223652  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:19.223694  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:19.281292  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:19.281353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:17.344863  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:19.348835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.595465  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:20.708087  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:22.708298  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.797351  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:21.811040  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:21.811120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:21.847213  142150 cri.go:89] found id: ""
	I1212 01:05:21.847242  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.847253  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:21.847261  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:21.847323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:21.883925  142150 cri.go:89] found id: ""
	I1212 01:05:21.883952  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.883961  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:21.883967  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:21.884029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:21.925919  142150 cri.go:89] found id: ""
	I1212 01:05:21.925946  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.925955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:21.925961  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:21.926025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:21.963672  142150 cri.go:89] found id: ""
	I1212 01:05:21.963708  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.963719  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:21.963728  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:21.963794  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:22.000058  142150 cri.go:89] found id: ""
	I1212 01:05:22.000086  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.000094  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:22.000100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:22.000153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:22.036262  142150 cri.go:89] found id: ""
	I1212 01:05:22.036294  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.036305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:22.036314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:22.036381  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:22.072312  142150 cri.go:89] found id: ""
	I1212 01:05:22.072348  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.072361  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:22.072369  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:22.072428  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:22.109376  142150 cri.go:89] found id: ""
	I1212 01:05:22.109406  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.109413  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:22.109422  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:22.109436  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:22.183975  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:22.184006  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:22.184024  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:22.262037  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:22.262076  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:22.306902  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:22.306934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:22.361922  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:22.361964  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:24.877203  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:24.891749  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:24.891822  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:24.926934  142150 cri.go:89] found id: ""
	I1212 01:05:24.926974  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.926987  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:24.926997  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:24.927061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:24.961756  142150 cri.go:89] found id: ""
	I1212 01:05:24.961791  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.961803  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:24.961812  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:24.961872  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:25.001414  142150 cri.go:89] found id: ""
	I1212 01:05:25.001449  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.001462  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:25.001470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:25.001536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:25.038398  142150 cri.go:89] found id: ""
	I1212 01:05:25.038429  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.038438  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:25.038443  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:25.038499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:25.074146  142150 cri.go:89] found id: ""
	I1212 01:05:25.074175  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.074184  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:25.074191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:25.074266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:25.112259  142150 cri.go:89] found id: ""
	I1212 01:05:25.112287  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.112295  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:25.112303  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:25.112366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:25.148819  142150 cri.go:89] found id: ""
	I1212 01:05:25.148846  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.148853  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:25.148859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:25.148916  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:25.191229  142150 cri.go:89] found id: ""
	I1212 01:05:25.191262  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.191274  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:25.191286  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:25.191298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:21.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:24.344442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:26.344638  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:23.095266  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.096246  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.097041  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.208225  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.706184  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.280584  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:25.280641  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:25.325436  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:25.325473  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:25.380358  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:25.380406  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:25.394854  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:25.394889  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:25.474359  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:27.975286  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:27.989833  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:27.989893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:28.027211  142150 cri.go:89] found id: ""
	I1212 01:05:28.027242  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.027254  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:28.027262  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:28.027319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:28.063115  142150 cri.go:89] found id: ""
	I1212 01:05:28.063147  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.063158  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:28.063165  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:28.063226  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:28.121959  142150 cri.go:89] found id: ""
	I1212 01:05:28.121993  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.122006  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:28.122014  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:28.122074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:28.161636  142150 cri.go:89] found id: ""
	I1212 01:05:28.161666  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.161674  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:28.161680  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:28.161745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:28.197581  142150 cri.go:89] found id: ""
	I1212 01:05:28.197615  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.197627  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:28.197636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:28.197704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:28.234811  142150 cri.go:89] found id: ""
	I1212 01:05:28.234839  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.234849  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:28.234857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:28.234914  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:28.275485  142150 cri.go:89] found id: ""
	I1212 01:05:28.275510  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.275518  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:28.275524  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:28.275570  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:28.311514  142150 cri.go:89] found id: ""
	I1212 01:05:28.311551  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.311562  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:28.311574  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:28.311608  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:28.362113  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:28.362153  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:28.376321  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:28.376353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:28.460365  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:28.460394  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:28.460412  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:28.545655  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:28.545697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:28.850925  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.344959  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.595032  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.595989  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.706696  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:32.206728  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.206974  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.088684  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:31.103954  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:31.104033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:31.143436  142150 cri.go:89] found id: ""
	I1212 01:05:31.143468  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.143478  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:31.143488  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:31.143541  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:31.181127  142150 cri.go:89] found id: ""
	I1212 01:05:31.181162  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.181173  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:31.181181  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:31.181246  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:31.217764  142150 cri.go:89] found id: ""
	I1212 01:05:31.217794  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.217805  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:31.217812  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:31.217882  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:31.253648  142150 cri.go:89] found id: ""
	I1212 01:05:31.253674  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.253683  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:31.253690  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:31.253745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:31.292365  142150 cri.go:89] found id: ""
	I1212 01:05:31.292393  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.292401  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:31.292407  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:31.292455  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:31.329834  142150 cri.go:89] found id: ""
	I1212 01:05:31.329866  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.329876  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:31.329883  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:31.329934  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:31.368679  142150 cri.go:89] found id: ""
	I1212 01:05:31.368712  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.368720  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:31.368726  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:31.368784  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:31.409003  142150 cri.go:89] found id: ""
	I1212 01:05:31.409028  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.409036  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:31.409053  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:31.409068  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:31.462888  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:31.462927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:31.477975  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:31.478011  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:31.545620  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:31.545648  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:31.545665  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:31.626530  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:31.626570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.167917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:34.183293  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:34.183372  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:34.219167  142150 cri.go:89] found id: ""
	I1212 01:05:34.219191  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.219200  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:34.219206  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:34.219265  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:34.254552  142150 cri.go:89] found id: ""
	I1212 01:05:34.254580  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.254588  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:34.254594  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:34.254645  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:34.289933  142150 cri.go:89] found id: ""
	I1212 01:05:34.289960  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.289969  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:34.289975  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:34.290027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:34.325468  142150 cri.go:89] found id: ""
	I1212 01:05:34.325497  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.325505  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:34.325510  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:34.325558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:34.364154  142150 cri.go:89] found id: ""
	I1212 01:05:34.364185  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.364197  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:34.364205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:34.364256  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:34.400516  142150 cri.go:89] found id: ""
	I1212 01:05:34.400546  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.400554  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:34.400559  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:34.400621  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:34.437578  142150 cri.go:89] found id: ""
	I1212 01:05:34.437608  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.437616  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:34.437622  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:34.437687  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:34.472061  142150 cri.go:89] found id: ""
	I1212 01:05:34.472094  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.472105  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:34.472117  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:34.472135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.526286  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:34.526340  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:34.610616  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:34.610664  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:34.625098  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:34.625130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:34.699706  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:34.699736  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:34.699759  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:33.844343  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.343847  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.096631  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.594963  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.707213  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:39.207473  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:37.282716  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:37.299415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:37.299486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:37.337783  142150 cri.go:89] found id: ""
	I1212 01:05:37.337820  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.337833  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:37.337842  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:37.337910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:37.375491  142150 cri.go:89] found id: ""
	I1212 01:05:37.375526  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.375539  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:37.375547  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:37.375637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:37.417980  142150 cri.go:89] found id: ""
	I1212 01:05:37.418016  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.418028  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:37.418037  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:37.418115  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:37.454902  142150 cri.go:89] found id: ""
	I1212 01:05:37.454936  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.454947  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:37.454956  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:37.455029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:37.492144  142150 cri.go:89] found id: ""
	I1212 01:05:37.492175  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.492188  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:37.492196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:37.492266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:37.531054  142150 cri.go:89] found id: ""
	I1212 01:05:37.531085  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.531094  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:37.531100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:37.531161  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:37.565127  142150 cri.go:89] found id: ""
	I1212 01:05:37.565169  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.565191  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:37.565209  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:37.565269  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:37.601233  142150 cri.go:89] found id: ""
	I1212 01:05:37.601273  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.601286  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:37.601300  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:37.601315  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:37.652133  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:37.652172  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:37.666974  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:37.667007  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:37.744500  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:37.744527  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:37.744544  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:37.825572  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:37.825611  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:38.842756  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.845163  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:38.595482  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.595779  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:41.707367  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:44.206693  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.366883  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:40.380597  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:40.380662  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:40.417588  142150 cri.go:89] found id: ""
	I1212 01:05:40.417614  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.417623  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:40.417629  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:40.417681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:40.452506  142150 cri.go:89] found id: ""
	I1212 01:05:40.452535  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.452547  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:40.452555  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:40.452620  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:40.496623  142150 cri.go:89] found id: ""
	I1212 01:05:40.496657  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.496669  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:40.496681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:40.496755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:40.534202  142150 cri.go:89] found id: ""
	I1212 01:05:40.534241  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.534266  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:40.534277  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:40.534337  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:40.580317  142150 cri.go:89] found id: ""
	I1212 01:05:40.580346  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.580359  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:40.580367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:40.580437  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:40.616814  142150 cri.go:89] found id: ""
	I1212 01:05:40.616842  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.616850  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:40.616857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:40.616909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:40.653553  142150 cri.go:89] found id: ""
	I1212 01:05:40.653584  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.653593  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:40.653603  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:40.653667  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:40.687817  142150 cri.go:89] found id: ""
	I1212 01:05:40.687843  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.687852  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:40.687862  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:40.687872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:40.739304  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:40.739343  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:40.753042  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:40.753074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:40.820091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:40.820112  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:40.820126  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:40.903503  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:40.903561  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.446157  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:43.461289  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:43.461365  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:43.503352  142150 cri.go:89] found id: ""
	I1212 01:05:43.503385  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.503394  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:43.503402  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:43.503466  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:43.541576  142150 cri.go:89] found id: ""
	I1212 01:05:43.541610  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.541619  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:43.541626  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:43.541683  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:43.581255  142150 cri.go:89] found id: ""
	I1212 01:05:43.581285  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.581298  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:43.581305  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:43.581384  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:43.622081  142150 cri.go:89] found id: ""
	I1212 01:05:43.622114  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.622126  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:43.622135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:43.622201  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:43.657001  142150 cri.go:89] found id: ""
	I1212 01:05:43.657032  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.657041  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:43.657048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:43.657114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:43.691333  142150 cri.go:89] found id: ""
	I1212 01:05:43.691362  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.691370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:43.691376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:43.691425  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:43.728745  142150 cri.go:89] found id: ""
	I1212 01:05:43.728779  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.728791  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:43.728799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:43.728864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:43.764196  142150 cri.go:89] found id: ""
	I1212 01:05:43.764229  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.764241  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:43.764253  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:43.764268  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.804433  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:43.804469  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:43.858783  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:43.858822  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:43.873582  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:43.873610  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:43.949922  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:43.949945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:43.949962  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:43.343827  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.346793  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:43.095993  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.096437  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.206828  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:48.708067  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.531390  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:46.546806  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:46.546881  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:46.583062  142150 cri.go:89] found id: ""
	I1212 01:05:46.583103  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.583116  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:46.583124  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:46.583187  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:46.621483  142150 cri.go:89] found id: ""
	I1212 01:05:46.621513  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.621524  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:46.621532  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:46.621595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:46.658400  142150 cri.go:89] found id: ""
	I1212 01:05:46.658431  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.658440  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:46.658450  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:46.658520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:46.694368  142150 cri.go:89] found id: ""
	I1212 01:05:46.694393  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.694407  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:46.694413  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:46.694469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:46.733456  142150 cri.go:89] found id: ""
	I1212 01:05:46.733492  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.733504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:46.733513  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:46.733574  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:46.767206  142150 cri.go:89] found id: ""
	I1212 01:05:46.767236  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.767248  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:46.767255  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:46.767317  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:46.803520  142150 cri.go:89] found id: ""
	I1212 01:05:46.803554  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.803564  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:46.803575  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:46.803657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:46.849563  142150 cri.go:89] found id: ""
	I1212 01:05:46.849590  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.849597  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:46.849606  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:46.849618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:46.862800  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:46.862831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:46.931858  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:46.931883  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:46.931896  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:47.009125  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:47.009167  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.050830  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:47.050858  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.604639  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:49.618087  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:49.618153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:49.653674  142150 cri.go:89] found id: ""
	I1212 01:05:49.653703  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.653712  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:49.653718  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:49.653772  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:49.688391  142150 cri.go:89] found id: ""
	I1212 01:05:49.688428  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.688439  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:49.688446  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:49.688516  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:49.729378  142150 cri.go:89] found id: ""
	I1212 01:05:49.729412  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.729423  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:49.729432  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:49.729492  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:49.765171  142150 cri.go:89] found id: ""
	I1212 01:05:49.765198  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.765206  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:49.765213  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:49.765260  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:49.800980  142150 cri.go:89] found id: ""
	I1212 01:05:49.801018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.801027  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:49.801034  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:49.801086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:49.836122  142150 cri.go:89] found id: ""
	I1212 01:05:49.836149  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.836161  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:49.836169  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:49.836235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:49.873978  142150 cri.go:89] found id: ""
	I1212 01:05:49.874018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.874027  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:49.874032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:49.874086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:49.909709  142150 cri.go:89] found id: ""
	I1212 01:05:49.909741  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.909754  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:49.909766  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:49.909783  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.963352  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:49.963394  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:49.977813  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:49.977841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:50.054423  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:50.054452  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:50.054470  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:50.133375  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:50.133416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.843200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:49.844564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:47.595931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:50.095312  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.096092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:51.206349  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:53.206853  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.673427  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:52.687196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:52.687259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:52.725001  142150 cri.go:89] found id: ""
	I1212 01:05:52.725031  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.725039  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:52.725045  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:52.725110  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:52.760885  142150 cri.go:89] found id: ""
	I1212 01:05:52.760923  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.760934  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:52.760941  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:52.761025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:52.798583  142150 cri.go:89] found id: ""
	I1212 01:05:52.798615  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.798627  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:52.798635  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:52.798700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:52.835957  142150 cri.go:89] found id: ""
	I1212 01:05:52.835983  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.835991  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:52.835998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:52.836065  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:52.876249  142150 cri.go:89] found id: ""
	I1212 01:05:52.876281  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.876292  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:52.876299  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:52.876397  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:52.911667  142150 cri.go:89] found id: ""
	I1212 01:05:52.911700  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.911712  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:52.911720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:52.911796  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:52.946781  142150 cri.go:89] found id: ""
	I1212 01:05:52.946808  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.946820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:52.946827  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:52.946889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:52.985712  142150 cri.go:89] found id: ""
	I1212 01:05:52.985740  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.985752  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:52.985762  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:52.985778  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:53.038522  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:53.038563  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:53.052336  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:53.052382  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:53.132247  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:53.132280  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:53.132297  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:53.208823  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:53.208851  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:52.344518  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.344667  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.594738  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:56.595036  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:57.207827  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.747479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:55.760703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:55.760765  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:55.797684  142150 cri.go:89] found id: ""
	I1212 01:05:55.797720  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.797732  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:55.797740  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:55.797807  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:55.840900  142150 cri.go:89] found id: ""
	I1212 01:05:55.840933  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.840944  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:55.840953  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:55.841033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:55.879098  142150 cri.go:89] found id: ""
	I1212 01:05:55.879131  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.879144  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:55.879152  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:55.879217  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:55.914137  142150 cri.go:89] found id: ""
	I1212 01:05:55.914166  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.914174  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:55.914181  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:55.914238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:55.950608  142150 cri.go:89] found id: ""
	I1212 01:05:55.950635  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.950644  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:55.950654  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:55.950705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:55.992162  142150 cri.go:89] found id: ""
	I1212 01:05:55.992187  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.992196  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:55.992202  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:55.992254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:56.028071  142150 cri.go:89] found id: ""
	I1212 01:05:56.028097  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.028105  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:56.028111  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:56.028164  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:56.063789  142150 cri.go:89] found id: ""
	I1212 01:05:56.063814  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.063822  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:56.063832  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:56.063844  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:56.118057  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:56.118096  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.132908  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:56.132939  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:56.200923  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:56.200951  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:56.200971  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:56.283272  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:56.283321  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:58.825548  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:58.839298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:58.839368  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:58.874249  142150 cri.go:89] found id: ""
	I1212 01:05:58.874289  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.874301  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:58.874313  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:58.874391  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:58.909238  142150 cri.go:89] found id: ""
	I1212 01:05:58.909273  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.909286  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:58.909294  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:58.909359  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:58.945112  142150 cri.go:89] found id: ""
	I1212 01:05:58.945139  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.945146  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:58.945154  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:58.945203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:58.981101  142150 cri.go:89] found id: ""
	I1212 01:05:58.981153  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.981168  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:58.981176  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:58.981241  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:59.015095  142150 cri.go:89] found id: ""
	I1212 01:05:59.015135  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.015147  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:59.015158  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:59.015224  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:59.051606  142150 cri.go:89] found id: ""
	I1212 01:05:59.051640  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.051650  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:59.051659  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:59.051719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:59.088125  142150 cri.go:89] found id: ""
	I1212 01:05:59.088153  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.088161  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:59.088166  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:59.088223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:59.127803  142150 cri.go:89] found id: ""
	I1212 01:05:59.127829  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.127841  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:59.127853  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:59.127871  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:59.204831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:59.204857  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:59.204872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:59.285346  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:59.285387  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:59.324194  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:59.324233  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:59.378970  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:59.379022  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.845550  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.344473  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:58.595556  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:00.595723  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.706748  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.709131  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.893635  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:01.907481  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:01.907606  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:01.949985  142150 cri.go:89] found id: ""
	I1212 01:06:01.950022  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.950035  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:01.950043  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:01.950112  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:01.986884  142150 cri.go:89] found id: ""
	I1212 01:06:01.986914  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.986923  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:01.986928  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:01.986994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:02.025010  142150 cri.go:89] found id: ""
	I1212 01:06:02.025044  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.025056  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:02.025063  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:02.025137  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:02.061300  142150 cri.go:89] found id: ""
	I1212 01:06:02.061340  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.061352  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:02.061361  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:02.061427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:02.098627  142150 cri.go:89] found id: ""
	I1212 01:06:02.098667  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.098677  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:02.098684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:02.098744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:02.137005  142150 cri.go:89] found id: ""
	I1212 01:06:02.137030  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.137038  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:02.137044  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:02.137104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:02.172052  142150 cri.go:89] found id: ""
	I1212 01:06:02.172086  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.172096  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:02.172102  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:02.172154  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:02.207721  142150 cri.go:89] found id: ""
	I1212 01:06:02.207750  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.207761  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:02.207771  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:02.207787  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:02.221576  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:02.221605  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:02.291780  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:02.291812  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:02.291826  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:02.376553  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:02.376595  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:02.418407  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:02.418446  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:04.973347  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:04.988470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:04.988545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:05.024045  142150 cri.go:89] found id: ""
	I1212 01:06:05.024076  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.024085  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:05.024092  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:05.024149  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:05.060055  142150 cri.go:89] found id: ""
	I1212 01:06:05.060079  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.060089  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:05.060095  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:05.060145  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:05.097115  142150 cri.go:89] found id: ""
	I1212 01:06:05.097142  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.097152  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:05.097160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:05.097220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:05.133941  142150 cri.go:89] found id: ""
	I1212 01:06:05.133976  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.133990  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:05.133998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:05.134063  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:05.169157  142150 cri.go:89] found id: ""
	I1212 01:06:05.169185  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.169193  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:05.169200  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:05.169253  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:05.206434  142150 cri.go:89] found id: ""
	I1212 01:06:05.206464  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.206475  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:05.206484  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:05.206546  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:01.842981  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.843341  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.843811  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:02.597066  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:04.597793  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:07.095874  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:06.206955  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:08.208809  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
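The interleaved pod_ready.go lines come from the other profiles in this run (process IDs 141884, 141411 and 141469), each polling its metrics-server pod in kube-system until the Ready condition turns True. A small sketch of an equivalent check follows; the pod name is taken from the log, while the two-second interval and the assumption that the current kubeconfig points at the affected profile are ours, and this is not the test suite's pod_ready.go.

// ready_poll.go - illustrative readiness poll, roughly what the pod_ready.go
// lines above are reporting; not the test suite's actual code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition is "True".
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Pod name as it appears in the log above; the 2s interval is an assumption.
	const pod = "metrics-server-6867b74b74-k9s7n"
	for {
		ok, err := podReady("kube-system", pod)
		if err != nil {
			fmt.Println("poll error:", err)
		} else {
			fmt.Printf("pod %q Ready=%v\n", pod, ok)
			if ok {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
}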
	I1212 01:06:05.248363  142150 cri.go:89] found id: ""
	I1212 01:06:05.248397  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.248409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:05.248417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:05.248485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:05.284898  142150 cri.go:89] found id: ""
	I1212 01:06:05.284932  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.284945  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:05.284958  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:05.284974  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:05.362418  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:05.362445  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:05.362464  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:05.446289  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:05.446349  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:05.487075  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:05.487107  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:05.542538  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:05.542582  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.057586  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:08.070959  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:08.071019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:08.109906  142150 cri.go:89] found id: ""
	I1212 01:06:08.109936  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.109945  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:08.109951  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:08.110005  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:08.145130  142150 cri.go:89] found id: ""
	I1212 01:06:08.145159  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.145168  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:08.145175  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:08.145223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:08.183454  142150 cri.go:89] found id: ""
	I1212 01:06:08.183485  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.183496  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:08.183504  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:08.183573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:08.218728  142150 cri.go:89] found id: ""
	I1212 01:06:08.218752  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.218763  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:08.218772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:08.218835  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:08.256230  142150 cri.go:89] found id: ""
	I1212 01:06:08.256263  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.256274  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:08.256283  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:08.256345  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:08.294179  142150 cri.go:89] found id: ""
	I1212 01:06:08.294209  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.294221  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:08.294229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:08.294293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:08.335793  142150 cri.go:89] found id: ""
	I1212 01:06:08.335822  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.335835  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:08.335843  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:08.335905  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:08.387704  142150 cri.go:89] found id: ""
	I1212 01:06:08.387734  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.387746  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:08.387757  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:08.387773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:08.465260  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:08.465307  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:08.508088  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:08.508129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:08.558617  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:08.558655  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.573461  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:08.573489  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:08.649664  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
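Every "describe nodes" attempt above fails the same way: the bundled v1.20.0 kubectl cannot reach the API server on localhost:8443, which is consistent with the kube-apiserver probes returning no container at all. A quick reachability check of the kind one might run by hand inside the node is sketched below; the /healthz path and the skip-verify client are assumptions for illustration only and are not part of the test.

// apiserver_check.go - hypothetical health probe against the secure port the
// log shows kubectl failing to reach; not part of the minikube test suite.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		// The apiserver's serving cert is not trusted here; verification is
		// skipped because we only care whether anything listens on 8443.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// With no kube-apiserver container running, this is the expected
		// outcome: connection refused, matching the kubectl error above.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}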
	I1212 01:06:07.844408  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.343200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:09.595982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:12.094513  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.708379  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:13.207302  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:11.150614  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:11.164991  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:11.165062  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:11.201977  142150 cri.go:89] found id: ""
	I1212 01:06:11.202011  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.202045  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:11.202055  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:11.202124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:11.243638  142150 cri.go:89] found id: ""
	I1212 01:06:11.243667  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.243676  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:11.243682  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:11.243742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:11.279577  142150 cri.go:89] found id: ""
	I1212 01:06:11.279621  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.279634  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:11.279642  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:11.279709  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:11.317344  142150 cri.go:89] found id: ""
	I1212 01:06:11.317378  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.317386  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:11.317392  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:11.317457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:11.358331  142150 cri.go:89] found id: ""
	I1212 01:06:11.358361  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.358373  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:11.358381  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:11.358439  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:11.393884  142150 cri.go:89] found id: ""
	I1212 01:06:11.393911  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.393919  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:11.393926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:11.393974  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:11.433243  142150 cri.go:89] found id: ""
	I1212 01:06:11.433290  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.433302  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:11.433310  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:11.433374  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:11.478597  142150 cri.go:89] found id: ""
	I1212 01:06:11.478625  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.478637  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:11.478650  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:11.478667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:11.528096  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:11.528133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:11.542118  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:11.542149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:11.612414  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:11.612435  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:11.612451  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:11.689350  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:11.689389  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.230677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:14.245866  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:14.245970  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:14.283451  142150 cri.go:89] found id: ""
	I1212 01:06:14.283487  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.283495  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:14.283502  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:14.283552  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:14.318812  142150 cri.go:89] found id: ""
	I1212 01:06:14.318840  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.318848  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:14.318855  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:14.318904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:14.356489  142150 cri.go:89] found id: ""
	I1212 01:06:14.356519  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.356527  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:14.356533  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:14.356590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:14.394224  142150 cri.go:89] found id: ""
	I1212 01:06:14.394260  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.394271  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:14.394279  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:14.394350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:14.432440  142150 cri.go:89] found id: ""
	I1212 01:06:14.432467  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.432480  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:14.432488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:14.432540  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:14.469777  142150 cri.go:89] found id: ""
	I1212 01:06:14.469822  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.469835  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:14.469844  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:14.469904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:14.504830  142150 cri.go:89] found id: ""
	I1212 01:06:14.504860  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.504872  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:14.504881  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:14.504941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:14.539399  142150 cri.go:89] found id: ""
	I1212 01:06:14.539423  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.539432  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:14.539441  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:14.539454  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:14.552716  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:14.552749  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:14.628921  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:14.628945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:14.628959  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:14.707219  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:14.707255  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.765953  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:14.765986  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:12.343941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.843333  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.095296  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:16.596411  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:15.706990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.707150  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.324233  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:17.337428  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:17.337499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:17.374493  142150 cri.go:89] found id: ""
	I1212 01:06:17.374526  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.374538  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:17.374547  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:17.374616  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:17.408494  142150 cri.go:89] found id: ""
	I1212 01:06:17.408519  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.408527  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:17.408535  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:17.408582  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:17.452362  142150 cri.go:89] found id: ""
	I1212 01:06:17.452389  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.452397  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:17.452403  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:17.452456  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:17.493923  142150 cri.go:89] found id: ""
	I1212 01:06:17.493957  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.493968  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:17.493976  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:17.494037  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:17.529519  142150 cri.go:89] found id: ""
	I1212 01:06:17.529548  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.529556  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:17.529562  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:17.529610  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:17.570272  142150 cri.go:89] found id: ""
	I1212 01:06:17.570297  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.570305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:17.570312  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:17.570361  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:17.609326  142150 cri.go:89] found id: ""
	I1212 01:06:17.609360  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.609371  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:17.609379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:17.609470  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:17.642814  142150 cri.go:89] found id: ""
	I1212 01:06:17.642844  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.642853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:17.642863  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:17.642875  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:17.656476  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:17.656510  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:17.726997  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:17.727024  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:17.727039  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:17.803377  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:17.803424  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:17.851190  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:17.851222  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:17.344804  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.347642  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.096235  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.594712  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.707303  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.707482  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:24.208937  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:20.406953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:20.420410  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:20.420484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:20.462696  142150 cri.go:89] found id: ""
	I1212 01:06:20.462733  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.462744  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:20.462752  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:20.462815  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:20.522881  142150 cri.go:89] found id: ""
	I1212 01:06:20.522906  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.522915  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:20.522921  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:20.522979  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:20.575876  142150 cri.go:89] found id: ""
	I1212 01:06:20.575917  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.575928  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:20.575936  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:20.576003  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:20.627875  142150 cri.go:89] found id: ""
	I1212 01:06:20.627907  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.627919  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:20.627926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:20.627976  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:20.668323  142150 cri.go:89] found id: ""
	I1212 01:06:20.668353  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.668365  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:20.668372  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:20.668441  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:20.705907  142150 cri.go:89] found id: ""
	I1212 01:06:20.705942  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.705954  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:20.705963  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:20.706023  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:20.740221  142150 cri.go:89] found id: ""
	I1212 01:06:20.740249  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.740257  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:20.740263  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:20.740328  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:20.780346  142150 cri.go:89] found id: ""
	I1212 01:06:20.780372  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.780380  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:20.780390  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:20.780407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:20.837660  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:20.837699  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:20.852743  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:20.852775  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:20.928353  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:20.928385  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:20.928401  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:21.009919  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:21.009961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:23.553897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:23.568667  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:23.568742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:23.607841  142150 cri.go:89] found id: ""
	I1212 01:06:23.607873  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.607884  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:23.607891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:23.607945  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:23.645461  142150 cri.go:89] found id: ""
	I1212 01:06:23.645494  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.645505  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:23.645513  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:23.645578  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:23.681140  142150 cri.go:89] found id: ""
	I1212 01:06:23.681165  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.681174  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:23.681180  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:23.681230  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:23.718480  142150 cri.go:89] found id: ""
	I1212 01:06:23.718515  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.718526  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:23.718534  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:23.718602  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:23.760206  142150 cri.go:89] found id: ""
	I1212 01:06:23.760235  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.760243  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:23.760249  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:23.760302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:23.797384  142150 cri.go:89] found id: ""
	I1212 01:06:23.797417  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.797431  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:23.797439  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:23.797496  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:23.830608  142150 cri.go:89] found id: ""
	I1212 01:06:23.830639  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.830650  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:23.830658  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:23.830722  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:23.867481  142150 cri.go:89] found id: ""
	I1212 01:06:23.867509  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.867522  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:23.867534  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:23.867551  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:23.922529  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:23.922579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:23.936763  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:23.936794  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:24.004371  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:24.004398  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:24.004413  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:24.083097  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:24.083136  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
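	The block above is one iteration of minikube's control-plane probe: cri.go asks crictl for each expected control-plane container, finds none, and logs.go then falls back to gathering kubelet, dmesg, CRI-O and container-status output. The following is a minimal local sketch of that probe in Go, not minikube's own code; it assumes crictl is installed and passwordless sudo is available on the machine where it runs (in the real run, ssh_runner.go executes the same command over SSH inside the VM), and the container-name list simply mirrors the log lines above.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs returns the container IDs crictl reports for a given
	// --name filter, matching the `found id: ""` / `0 containers` lines above.
	// Assumes crictl on PATH and passwordless sudo.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		names := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range names {
			ids, err := listContainerIDs(name)
			if err != nil {
				fmt.Printf("probe for %q failed: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
		}
	}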
	I1212 01:06:21.842975  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.845498  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.343574  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.596224  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.094625  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.707610  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:29.208425  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.633394  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:26.646898  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:26.646977  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:26.680382  142150 cri.go:89] found id: ""
	I1212 01:06:26.680411  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.680421  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:26.680427  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:26.680475  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:26.716948  142150 cri.go:89] found id: ""
	I1212 01:06:26.716982  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.716994  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:26.717001  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:26.717090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:26.753141  142150 cri.go:89] found id: ""
	I1212 01:06:26.753168  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.753176  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:26.753182  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:26.753231  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:26.791025  142150 cri.go:89] found id: ""
	I1212 01:06:26.791056  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.791068  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:26.791074  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:26.791130  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:26.829914  142150 cri.go:89] found id: ""
	I1212 01:06:26.829952  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.829965  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:26.829973  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:26.830046  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:26.865990  142150 cri.go:89] found id: ""
	I1212 01:06:26.866022  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.866045  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:26.866053  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:26.866133  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:26.906007  142150 cri.go:89] found id: ""
	I1212 01:06:26.906040  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.906052  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:26.906060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:26.906141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:26.946004  142150 cri.go:89] found id: ""
	I1212 01:06:26.946038  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.946048  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:26.946057  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:26.946073  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:27.018967  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:27.018996  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:27.019013  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:27.100294  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:27.100334  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:27.141147  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:27.141190  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:27.193161  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:27.193200  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:29.709616  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:29.723336  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:29.723413  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:29.769938  142150 cri.go:89] found id: ""
	I1212 01:06:29.769966  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.769977  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:29.769985  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:29.770048  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:29.809109  142150 cri.go:89] found id: ""
	I1212 01:06:29.809147  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.809160  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:29.809168  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:29.809229  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:29.845444  142150 cri.go:89] found id: ""
	I1212 01:06:29.845471  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.845481  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:29.845488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:29.845548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:29.882109  142150 cri.go:89] found id: ""
	I1212 01:06:29.882138  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.882147  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:29.882153  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:29.882203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:29.928731  142150 cri.go:89] found id: ""
	I1212 01:06:29.928764  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.928777  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:29.928785  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:29.928849  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:29.972994  142150 cri.go:89] found id: ""
	I1212 01:06:29.973026  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.973041  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:29.973048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:29.973098  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:30.009316  142150 cri.go:89] found id: ""
	I1212 01:06:30.009349  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.009357  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:30.009363  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:30.009422  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:30.043082  142150 cri.go:89] found id: ""
	I1212 01:06:30.043111  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.043122  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:30.043134  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:30.043149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:30.097831  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:30.097866  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:30.112873  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:30.112906  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:30.187035  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:30.187061  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:30.187081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:28.843986  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.343502  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:28.096043  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.594875  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.707976  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:34.208061  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.273106  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:30.273155  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:32.819179  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:32.833486  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:32.833555  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:32.872579  142150 cri.go:89] found id: ""
	I1212 01:06:32.872622  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.872631  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:32.872645  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:32.872700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:32.909925  142150 cri.go:89] found id: ""
	I1212 01:06:32.909958  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.909970  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:32.909979  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:32.910053  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:32.949085  142150 cri.go:89] found id: ""
	I1212 01:06:32.949116  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.949127  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:32.949135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:32.949197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:32.985755  142150 cri.go:89] found id: ""
	I1212 01:06:32.985782  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.985790  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:32.985796  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:32.985845  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:33.028340  142150 cri.go:89] found id: ""
	I1212 01:06:33.028367  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.028374  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:33.028380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:33.028432  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:33.064254  142150 cri.go:89] found id: ""
	I1212 01:06:33.064283  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.064292  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:33.064298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:33.064349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:33.099905  142150 cri.go:89] found id: ""
	I1212 01:06:33.099936  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.099943  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:33.099949  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:33.100008  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:33.137958  142150 cri.go:89] found id: ""
	I1212 01:06:33.137993  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.138004  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:33.138016  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:33.138034  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:33.190737  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:33.190776  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:33.205466  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:33.205502  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:33.278815  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:33.278844  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:33.278863  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:33.357387  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:33.357429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:33.843106  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.344148  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:33.095175  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.095369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:37.095797  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.707296  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.207875  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.898317  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:35.913832  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:35.913907  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:35.950320  142150 cri.go:89] found id: ""
	I1212 01:06:35.950345  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.950353  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:35.950359  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:35.950407  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:35.989367  142150 cri.go:89] found id: ""
	I1212 01:06:35.989394  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.989403  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:35.989409  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:35.989457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:36.024118  142150 cri.go:89] found id: ""
	I1212 01:06:36.024148  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.024155  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:36.024163  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:36.024221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:36.059937  142150 cri.go:89] found id: ""
	I1212 01:06:36.059966  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.059974  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:36.059980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:36.060030  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:36.096897  142150 cri.go:89] found id: ""
	I1212 01:06:36.096921  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.096933  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:36.096941  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:36.096994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:36.134387  142150 cri.go:89] found id: ""
	I1212 01:06:36.134412  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.134420  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:36.134426  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:36.134490  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:36.177414  142150 cri.go:89] found id: ""
	I1212 01:06:36.177452  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.177464  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:36.177471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:36.177533  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:36.221519  142150 cri.go:89] found id: ""
	I1212 01:06:36.221553  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.221563  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:36.221575  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:36.221590  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:36.234862  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:36.234891  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:36.314361  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:36.314391  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:36.314407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:36.398283  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:36.398328  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:36.441441  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:36.441481  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:38.995369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:39.009149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:39.009221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:39.044164  142150 cri.go:89] found id: ""
	I1212 01:06:39.044194  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.044204  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:39.044210  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:39.044259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:39.080145  142150 cri.go:89] found id: ""
	I1212 01:06:39.080180  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.080191  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:39.080197  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:39.080254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:39.119128  142150 cri.go:89] found id: ""
	I1212 01:06:39.119156  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.119167  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:39.119174  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:39.119240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:39.157444  142150 cri.go:89] found id: ""
	I1212 01:06:39.157476  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.157487  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:39.157495  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:39.157562  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:39.191461  142150 cri.go:89] found id: ""
	I1212 01:06:39.191486  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.191497  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:39.191505  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:39.191573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:39.227742  142150 cri.go:89] found id: ""
	I1212 01:06:39.227769  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.227777  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:39.227783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:39.227832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:39.268207  142150 cri.go:89] found id: ""
	I1212 01:06:39.268239  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.268251  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:39.268259  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:39.268319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:39.304054  142150 cri.go:89] found id: ""
	I1212 01:06:39.304092  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.304103  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:39.304115  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:39.304128  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:39.381937  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:39.381979  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:39.421824  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:39.421864  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:39.475968  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:39.476020  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:39.491398  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:39.491429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:39.568463  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:38.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.343589  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.594883  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.594919  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.707035  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.707860  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
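	The interleaved pod_ready.go lines come from the other test clusters (processes 141884, 141411, 141469) polling their metrics-server pods for the PodReady condition every couple of seconds. Below is a minimal sketch of such a readiness poll using client-go; it is not minikube's pod_ready.go, and the kubeconfig path and pod name are placeholders lifted from these logs (adjust them for wherever the sketch actually runs).

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path taken from the logs above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
				"metrics-server-6867b74b74-k9s7n", metav1.GetOptions{})
			if err != nil {
				fmt.Println("get pod:", err)
			} else if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			} else {
				fmt.Println(`pod has status "Ready":"False"`)
			}
			time.Sleep(2 * time.Second) // the logs above show roughly 2-2.5s between checks
		}
	}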
	I1212 01:06:42.068594  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:42.082041  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:42.082123  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:42.121535  142150 cri.go:89] found id: ""
	I1212 01:06:42.121562  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.121570  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:42.121577  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:42.121627  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:42.156309  142150 cri.go:89] found id: ""
	I1212 01:06:42.156341  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.156350  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:42.156364  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:42.156427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:42.190111  142150 cri.go:89] found id: ""
	I1212 01:06:42.190137  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.190145  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:42.190151  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:42.190209  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:42.225424  142150 cri.go:89] found id: ""
	I1212 01:06:42.225452  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.225461  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:42.225468  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:42.225526  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:42.260519  142150 cri.go:89] found id: ""
	I1212 01:06:42.260552  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.260564  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:42.260576  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:42.260644  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:42.296987  142150 cri.go:89] found id: ""
	I1212 01:06:42.297017  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.297028  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:42.297036  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:42.297109  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:42.331368  142150 cri.go:89] found id: ""
	I1212 01:06:42.331400  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.331409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:42.331415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:42.331482  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:42.367010  142150 cri.go:89] found id: ""
	I1212 01:06:42.367051  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.367062  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:42.367075  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:42.367093  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:42.381264  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:42.381299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:42.452831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:42.452856  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:42.452877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:42.531965  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:42.532006  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:42.571718  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:42.571757  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.128570  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:45.142897  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:45.142969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:45.186371  142150 cri.go:89] found id: ""
	I1212 01:06:45.186404  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.186412  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:45.186418  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:45.186468  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:45.224085  142150 cri.go:89] found id: ""
	I1212 01:06:45.224115  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.224123  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:45.224129  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:45.224195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:43.346470  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.845269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.595640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.596624  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.708204  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:48.206947  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.258477  142150 cri.go:89] found id: ""
	I1212 01:06:45.258510  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.258522  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:45.258530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:45.258590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:45.293091  142150 cri.go:89] found id: ""
	I1212 01:06:45.293125  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.293137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:45.293145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:45.293211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:45.331275  142150 cri.go:89] found id: ""
	I1212 01:06:45.331314  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.331325  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:45.331332  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:45.331400  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:45.374915  142150 cri.go:89] found id: ""
	I1212 01:06:45.374943  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.374956  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:45.374965  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:45.375027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:45.415450  142150 cri.go:89] found id: ""
	I1212 01:06:45.415480  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.415489  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:45.415496  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:45.415548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:45.454407  142150 cri.go:89] found id: ""
	I1212 01:06:45.454431  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.454439  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:45.454449  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:45.454460  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.508573  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:45.508612  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:45.524049  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:45.524085  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:45.593577  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:45.593602  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:45.593618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:45.678581  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:45.678620  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.221523  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:48.235146  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:48.235212  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:48.271845  142150 cri.go:89] found id: ""
	I1212 01:06:48.271875  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.271885  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:48.271891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:48.271944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:48.308558  142150 cri.go:89] found id: ""
	I1212 01:06:48.308589  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.308602  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:48.308610  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:48.308673  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:48.346395  142150 cri.go:89] found id: ""
	I1212 01:06:48.346423  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.346434  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:48.346440  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:48.346501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:48.381505  142150 cri.go:89] found id: ""
	I1212 01:06:48.381536  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.381548  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:48.381555  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:48.381617  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:48.417829  142150 cri.go:89] found id: ""
	I1212 01:06:48.417859  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.417871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:48.417878  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:48.417944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:48.453476  142150 cri.go:89] found id: ""
	I1212 01:06:48.453508  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.453519  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:48.453528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:48.453592  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:48.490500  142150 cri.go:89] found id: ""
	I1212 01:06:48.490531  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.490541  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:48.490547  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:48.490597  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:48.527492  142150 cri.go:89] found id: ""
	I1212 01:06:48.527520  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.527529  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:48.527539  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:48.527550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.570458  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:48.570499  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:48.623986  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:48.624031  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:48.638363  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:48.638392  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:48.709373  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:48.709400  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:48.709416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:48.344831  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.345010  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:47.596708  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.094517  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:52.094931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.706903  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:53.207824  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:51.291629  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:51.305060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:51.305140  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:51.340368  142150 cri.go:89] found id: ""
	I1212 01:06:51.340394  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.340404  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:51.340411  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:51.340489  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:51.381421  142150 cri.go:89] found id: ""
	I1212 01:06:51.381453  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.381466  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:51.381474  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:51.381536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:51.421482  142150 cri.go:89] found id: ""
	I1212 01:06:51.421518  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.421530  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:51.421538  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:51.421605  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:51.457190  142150 cri.go:89] found id: ""
	I1212 01:06:51.457217  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.457227  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:51.457236  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:51.457302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:51.496149  142150 cri.go:89] found id: ""
	I1212 01:06:51.496184  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.496196  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:51.496205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:51.496270  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:51.529779  142150 cri.go:89] found id: ""
	I1212 01:06:51.529809  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.529820  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:51.529826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:51.529893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:51.568066  142150 cri.go:89] found id: ""
	I1212 01:06:51.568105  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.568118  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:51.568126  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:51.568197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:51.605556  142150 cri.go:89] found id: ""
	I1212 01:06:51.605593  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.605605  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:51.605616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:51.605632  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:51.680531  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:51.680570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:51.727663  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:51.727697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:51.780013  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:51.780053  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:51.794203  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:51.794232  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:51.869407  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
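	Every "describe nodes" attempt above fails the same way: nothing is listening on localhost:8443 inside the node, so kubectl gets "connection refused" before it can query anything. A quick way to distinguish "apiserver not listening" from "apiserver slow or unhealthy" is a plain TCP dial; this is a sketch under the assumption that the apiserver is expected on localhost:8443 on the machine where it runs.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the apiserver port with a short timeout; a refused connection
		// reproduces the error seen in the kubectl output above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // e.g. "connection refused"
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}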
	I1212 01:06:54.369854  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:54.383539  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:54.383625  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:54.418536  142150 cri.go:89] found id: ""
	I1212 01:06:54.418574  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.418586  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:54.418594  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:54.418657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:54.454485  142150 cri.go:89] found id: ""
	I1212 01:06:54.454515  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.454523  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:54.454531  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:54.454581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:54.494254  142150 cri.go:89] found id: ""
	I1212 01:06:54.494284  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.494296  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:54.494304  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:54.494366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:54.532727  142150 cri.go:89] found id: ""
	I1212 01:06:54.532757  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.532768  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:54.532776  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:54.532862  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:54.569817  142150 cri.go:89] found id: ""
	I1212 01:06:54.569845  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.569856  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:54.569864  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:54.569927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:54.628530  142150 cri.go:89] found id: ""
	I1212 01:06:54.628564  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.628577  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:54.628585  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:54.628635  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:54.666761  142150 cri.go:89] found id: ""
	I1212 01:06:54.666792  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.666801  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:54.666808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:54.666879  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:54.703699  142150 cri.go:89] found id: ""
	I1212 01:06:54.703726  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.703737  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:54.703749  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:54.703764  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:54.754635  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:54.754672  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:54.769112  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:54.769143  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:54.845563  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.845580  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:54.845591  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:54.922651  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:54.922690  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:52.843114  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.845370  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.095381  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:56.097745  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:55.207916  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.708907  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.467454  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:57.480673  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:57.480769  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:57.517711  142150 cri.go:89] found id: ""
	I1212 01:06:57.517737  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.517745  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:57.517751  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:57.517813  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:57.552922  142150 cri.go:89] found id: ""
	I1212 01:06:57.552948  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.552956  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:57.552977  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:57.553061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:57.589801  142150 cri.go:89] found id: ""
	I1212 01:06:57.589827  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.589839  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:57.589845  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:57.589909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:57.626088  142150 cri.go:89] found id: ""
	I1212 01:06:57.626123  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.626135  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:57.626142  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:57.626211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:57.661228  142150 cri.go:89] found id: ""
	I1212 01:06:57.661261  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.661273  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:57.661281  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:57.661344  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:57.699523  142150 cri.go:89] found id: ""
	I1212 01:06:57.699551  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.699559  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:57.699565  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:57.699641  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:57.739000  142150 cri.go:89] found id: ""
	I1212 01:06:57.739032  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.739043  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:57.739051  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:57.739128  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:57.776691  142150 cri.go:89] found id: ""
	I1212 01:06:57.776723  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.776732  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:57.776743  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:57.776767  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:57.828495  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:57.828535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:57.843935  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:57.843970  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:57.916420  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:57.916446  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:57.916463  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:57.994107  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:57.994158  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:57.344917  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:59.844269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:58.595415  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:01.095794  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.208708  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:02.707173  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.540646  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:00.554032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:00.554141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:00.590815  142150 cri.go:89] found id: ""
	I1212 01:07:00.590843  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.590852  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:00.590858  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:00.590919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:00.627460  142150 cri.go:89] found id: ""
	I1212 01:07:00.627494  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.627507  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:00.627515  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:00.627586  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:00.667429  142150 cri.go:89] found id: ""
	I1212 01:07:00.667472  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.667484  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:00.667494  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:00.667558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:00.713026  142150 cri.go:89] found id: ""
	I1212 01:07:00.713053  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.713060  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:00.713067  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:00.713129  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:00.748218  142150 cri.go:89] found id: ""
	I1212 01:07:00.748251  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.748264  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:00.748272  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:00.748325  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:00.786287  142150 cri.go:89] found id: ""
	I1212 01:07:00.786314  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.786322  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:00.786331  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:00.786389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:00.822957  142150 cri.go:89] found id: ""
	I1212 01:07:00.822986  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.822999  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:00.823007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:00.823081  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:00.862310  142150 cri.go:89] found id: ""
	I1212 01:07:00.862342  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.862354  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:00.862368  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:00.862385  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:00.930308  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:00.930343  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:00.930360  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:01.013889  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:01.013934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:01.064305  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:01.064342  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:01.133631  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:01.133678  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:03.648853  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:03.663287  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:03.663349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:03.700723  142150 cri.go:89] found id: ""
	I1212 01:07:03.700754  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.700766  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:03.700774  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:03.700840  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:03.741025  142150 cri.go:89] found id: ""
	I1212 01:07:03.741054  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.741065  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:03.741073  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:03.741147  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:03.782877  142150 cri.go:89] found id: ""
	I1212 01:07:03.782914  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.782927  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:03.782935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:03.782998  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:03.819227  142150 cri.go:89] found id: ""
	I1212 01:07:03.819272  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.819285  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:03.819292  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:03.819341  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:03.856660  142150 cri.go:89] found id: ""
	I1212 01:07:03.856687  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.856695  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:03.856701  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:03.856750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:03.893368  142150 cri.go:89] found id: ""
	I1212 01:07:03.893400  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.893410  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:03.893417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:03.893469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:03.929239  142150 cri.go:89] found id: ""
	I1212 01:07:03.929267  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.929275  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:03.929282  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:03.929335  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:03.963040  142150 cri.go:89] found id: ""
	I1212 01:07:03.963077  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.963089  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:03.963113  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:03.963129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:04.040119  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:04.040147  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:04.040161  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:04.122230  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:04.122269  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:04.163266  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:04.163298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:04.218235  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:04.218271  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:02.342899  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:04.343072  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:03.596239  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.094842  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:05.206813  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:07.209422  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.732405  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:06.748171  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:06.748278  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:06.792828  142150 cri.go:89] found id: ""
	I1212 01:07:06.792853  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.792861  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:06.792868  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:06.792929  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:06.851440  142150 cri.go:89] found id: ""
	I1212 01:07:06.851472  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.851483  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:06.851490  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:06.851556  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:06.894850  142150 cri.go:89] found id: ""
	I1212 01:07:06.894879  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.894887  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:06.894893  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:06.894944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:06.931153  142150 cri.go:89] found id: ""
	I1212 01:07:06.931188  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.931199  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:06.931206  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:06.931271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:06.966835  142150 cri.go:89] found id: ""
	I1212 01:07:06.966862  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.966871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:06.966877  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:06.966939  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:07.004810  142150 cri.go:89] found id: ""
	I1212 01:07:07.004839  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.004848  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:07.004854  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:07.004912  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:07.042641  142150 cri.go:89] found id: ""
	I1212 01:07:07.042679  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.042691  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:07.042699  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:07.042764  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:07.076632  142150 cri.go:89] found id: ""
	I1212 01:07:07.076659  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.076668  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:07.076678  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:07.076692  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:07.136796  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:07.136841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:07.153797  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:07.153831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:07.231995  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:07.232025  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:07.232042  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:07.319913  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:07.319950  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:09.862898  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:09.878554  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:09.878640  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:09.914747  142150 cri.go:89] found id: ""
	I1212 01:07:09.914782  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.914795  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:09.914803  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:09.914864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:09.949960  142150 cri.go:89] found id: ""
	I1212 01:07:09.949998  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.950019  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:09.950027  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:09.950084  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:09.989328  142150 cri.go:89] found id: ""
	I1212 01:07:09.989368  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.989380  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:09.989388  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:09.989454  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:10.024352  142150 cri.go:89] found id: ""
	I1212 01:07:10.024382  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.024390  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:10.024397  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:10.024446  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:10.058429  142150 cri.go:89] found id: ""
	I1212 01:07:10.058459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.058467  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:10.058473  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:10.058524  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:10.095183  142150 cri.go:89] found id: ""
	I1212 01:07:10.095219  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.095227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:10.095232  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:10.095284  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:10.129657  142150 cri.go:89] found id: ""
	I1212 01:07:10.129684  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.129695  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:10.129703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:10.129759  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:10.164433  142150 cri.go:89] found id: ""
	I1212 01:07:10.164459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.164470  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:10.164483  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:10.164500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:10.178655  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:10.178687  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 01:07:08.842564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.843885  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:08.095189  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.096580  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:09.707537  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.205862  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.207175  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	W1212 01:07:10.252370  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:10.252403  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:10.252421  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:10.329870  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:10.329914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:10.377778  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:10.377812  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:12.929471  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:12.944591  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:12.944651  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:12.980053  142150 cri.go:89] found id: ""
	I1212 01:07:12.980079  142150 logs.go:282] 0 containers: []
	W1212 01:07:12.980088  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:12.980097  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:12.980182  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:13.021710  142150 cri.go:89] found id: ""
	I1212 01:07:13.021743  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.021752  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:13.021758  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:13.021828  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:13.060426  142150 cri.go:89] found id: ""
	I1212 01:07:13.060458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.060469  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:13.060477  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:13.060545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:13.097435  142150 cri.go:89] found id: ""
	I1212 01:07:13.097458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.097466  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:13.097471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:13.097521  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:13.134279  142150 cri.go:89] found id: ""
	I1212 01:07:13.134314  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.134327  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:13.134335  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:13.134402  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:13.169942  142150 cri.go:89] found id: ""
	I1212 01:07:13.169971  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.169984  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:13.169992  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:13.170054  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:13.207495  142150 cri.go:89] found id: ""
	I1212 01:07:13.207526  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.207537  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:13.207550  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:13.207636  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:13.245214  142150 cri.go:89] found id: ""
	I1212 01:07:13.245240  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.245248  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:13.245258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:13.245272  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:13.301041  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:13.301081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:13.316068  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:13.316104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:13.391091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:13.391120  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:13.391138  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:13.472090  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:13.472130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:12.844629  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:15.344452  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.594761  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.595360  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:17.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.707535  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.208767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.013216  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.026636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:16.026715  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:16.062126  142150 cri.go:89] found id: ""
	I1212 01:07:16.062157  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.062169  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:16.062177  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:16.062240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:16.097538  142150 cri.go:89] found id: ""
	I1212 01:07:16.097562  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.097572  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:16.097581  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:16.097637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:16.133615  142150 cri.go:89] found id: ""
	I1212 01:07:16.133649  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.133661  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:16.133670  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:16.133732  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:16.169327  142150 cri.go:89] found id: ""
	I1212 01:07:16.169392  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.169414  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:16.169431  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:16.169538  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:16.214246  142150 cri.go:89] found id: ""
	I1212 01:07:16.214270  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.214278  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:16.214284  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:16.214342  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:16.251578  142150 cri.go:89] found id: ""
	I1212 01:07:16.251629  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.251641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:16.251649  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:16.251712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:16.298772  142150 cri.go:89] found id: ""
	I1212 01:07:16.298802  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.298811  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:16.298818  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:16.298891  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:16.336901  142150 cri.go:89] found id: ""
	I1212 01:07:16.336937  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.336946  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:16.336957  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:16.336969  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:16.389335  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:16.389376  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:16.403713  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:16.403743  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:16.485945  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:16.485972  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:16.485992  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:16.572137  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:16.572185  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.120296  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.133826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:19.133902  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:19.174343  142150 cri.go:89] found id: ""
	I1212 01:07:19.174381  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.174391  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:19.174397  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:19.174449  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:19.212403  142150 cri.go:89] found id: ""
	I1212 01:07:19.212425  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.212433  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:19.212439  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:19.212488  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:19.247990  142150 cri.go:89] found id: ""
	I1212 01:07:19.248018  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.248027  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:19.248033  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:19.248088  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:19.286733  142150 cri.go:89] found id: ""
	I1212 01:07:19.286763  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.286775  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:19.286783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:19.286848  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:19.325967  142150 cri.go:89] found id: ""
	I1212 01:07:19.325995  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.326006  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:19.326013  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:19.326073  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:19.361824  142150 cri.go:89] found id: ""
	I1212 01:07:19.361862  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.361874  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:19.361882  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:19.361951  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:19.399874  142150 cri.go:89] found id: ""
	I1212 01:07:19.399903  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.399915  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:19.399924  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:19.399978  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:19.444342  142150 cri.go:89] found id: ""
	I1212 01:07:19.444368  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.444376  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:19.444386  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:19.444398  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:19.524722  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:19.524766  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.564941  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:19.564984  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:19.620881  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:19.620915  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:19.635038  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:19.635078  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:19.707819  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:17.851516  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:20.343210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.596696  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.095982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:21.706245  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:23.707282  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.208686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.222716  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:22.222774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:22.258211  142150 cri.go:89] found id: ""
	I1212 01:07:22.258237  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.258245  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:22.258251  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:22.258299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:22.294663  142150 cri.go:89] found id: ""
	I1212 01:07:22.294692  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.294701  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:22.294707  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:22.294771  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:22.331817  142150 cri.go:89] found id: ""
	I1212 01:07:22.331849  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.331861  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:22.331869  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:22.331927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:22.373138  142150 cri.go:89] found id: ""
	I1212 01:07:22.373168  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.373176  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:22.373185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:22.373238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:22.409864  142150 cri.go:89] found id: ""
	I1212 01:07:22.409903  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.409916  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:22.409927  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:22.409983  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:22.447498  142150 cri.go:89] found id: ""
	I1212 01:07:22.447531  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.447542  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:22.447549  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:22.447626  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:22.488674  142150 cri.go:89] found id: ""
	I1212 01:07:22.488715  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.488727  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:22.488735  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:22.488803  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:22.529769  142150 cri.go:89] found id: ""
	I1212 01:07:22.529797  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.529806  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:22.529817  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:22.529837  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:22.611864  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:22.611889  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:22.611904  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:22.694660  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:22.694707  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:22.736800  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:22.736838  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:22.789670  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:22.789710  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:22.344482  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.844735  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.594999  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:26.595500  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:25.707950  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.200781  141469 pod_ready.go:82] duration metric: took 4m0.000776844s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:28.200837  141469 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:28.200866  141469 pod_ready.go:39] duration metric: took 4m15.556500045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:28.200916  141469 kubeadm.go:597] duration metric: took 4m22.571399912s to restartPrimaryControlPlane
	W1212 01:07:28.201043  141469 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:28.201086  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:25.305223  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.318986  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:25.319057  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:25.356111  142150 cri.go:89] found id: ""
	I1212 01:07:25.356140  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.356150  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:25.356157  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:25.356223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:25.396120  142150 cri.go:89] found id: ""
	I1212 01:07:25.396151  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.396163  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:25.396171  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:25.396236  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:25.436647  142150 cri.go:89] found id: ""
	I1212 01:07:25.436674  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.436681  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:25.436687  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:25.436744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:25.475682  142150 cri.go:89] found id: ""
	I1212 01:07:25.475709  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.475721  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:25.475729  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:25.475791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:25.512536  142150 cri.go:89] found id: ""
	I1212 01:07:25.512564  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.512576  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:25.512584  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:25.512655  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:25.549569  142150 cri.go:89] found id: ""
	I1212 01:07:25.549600  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.549609  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:25.549616  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:25.549681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:25.585042  142150 cri.go:89] found id: ""
	I1212 01:07:25.585074  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.585089  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:25.585106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:25.585181  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:25.626257  142150 cri.go:89] found id: ""
	I1212 01:07:25.626283  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.626291  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:25.626301  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:25.626314  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:25.679732  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:25.679773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:25.693682  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:25.693711  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:25.770576  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:25.770599  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:25.770613  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:25.848631  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:25.848667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
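The scan that precedes each log-gathering pass above runs `crictl ps -a --quiet --name=<component>` per control-plane component and counts the returned IDs. Below is an illustrative local sketch of that query; minikube actually runs the same command on the node over SSH via ssh_runner, and the component list here is simply copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the container IDs that crictl reports for a name
// filter; an empty result corresponds to the `0 containers: []` log lines.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}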
	I1212 01:07:28.388387  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.404838  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:28.404925  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:28.447452  142150 cri.go:89] found id: ""
	I1212 01:07:28.447486  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.447498  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:28.447506  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:28.447581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:28.487285  142150 cri.go:89] found id: ""
	I1212 01:07:28.487312  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.487321  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:28.487326  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:28.487389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:28.520403  142150 cri.go:89] found id: ""
	I1212 01:07:28.520433  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.520442  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:28.520448  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:28.520514  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:28.556671  142150 cri.go:89] found id: ""
	I1212 01:07:28.556703  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.556712  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:28.556720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:28.556787  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:28.597136  142150 cri.go:89] found id: ""
	I1212 01:07:28.597165  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.597176  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:28.597185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:28.597258  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:28.632603  142150 cri.go:89] found id: ""
	I1212 01:07:28.632633  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.632641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:28.632648  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:28.632710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:28.672475  142150 cri.go:89] found id: ""
	I1212 01:07:28.672512  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.672523  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:28.672530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:28.672581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:28.715053  142150 cri.go:89] found id: ""
	I1212 01:07:28.715093  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.715104  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:28.715114  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:28.715129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.752978  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:28.753017  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:28.807437  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:28.807479  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:28.822196  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:28.822223  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:28.902592  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:28.902616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:28.902630  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:27.343233  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:29.344194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.596410  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.096062  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.486972  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.500676  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:31.500755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:31.536877  142150 cri.go:89] found id: ""
	I1212 01:07:31.536911  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.536922  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:31.536931  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:31.537000  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:31.572637  142150 cri.go:89] found id: ""
	I1212 01:07:31.572670  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.572684  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:31.572692  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:31.572761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:31.610050  142150 cri.go:89] found id: ""
	I1212 01:07:31.610084  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.610097  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:31.610106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:31.610159  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:31.645872  142150 cri.go:89] found id: ""
	I1212 01:07:31.645905  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.645918  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:31.645926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:31.645988  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:31.682374  142150 cri.go:89] found id: ""
	I1212 01:07:31.682401  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.682409  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:31.682415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:31.682464  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:31.724755  142150 cri.go:89] found id: ""
	I1212 01:07:31.724788  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.724801  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:31.724809  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:31.724877  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:31.760700  142150 cri.go:89] found id: ""
	I1212 01:07:31.760732  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.760741  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:31.760747  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:31.760823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:31.794503  142150 cri.go:89] found id: ""
	I1212 01:07:31.794538  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.794549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:31.794562  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:31.794577  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:31.837103  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:31.837139  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:31.889104  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:31.889142  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:31.905849  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:31.905883  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:31.983351  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:31.983372  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:31.983388  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:34.564505  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.577808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:34.577884  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:34.616950  142150 cri.go:89] found id: ""
	I1212 01:07:34.616979  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.616992  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:34.617001  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:34.617071  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:34.653440  142150 cri.go:89] found id: ""
	I1212 01:07:34.653470  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.653478  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:34.653485  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:34.653535  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:34.693426  142150 cri.go:89] found id: ""
	I1212 01:07:34.693457  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.693465  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:34.693471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:34.693520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:34.727113  142150 cri.go:89] found id: ""
	I1212 01:07:34.727154  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.727166  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:34.727175  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:34.727237  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:34.766942  142150 cri.go:89] found id: ""
	I1212 01:07:34.766967  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.766974  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:34.766981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:34.767032  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:34.806189  142150 cri.go:89] found id: ""
	I1212 01:07:34.806214  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.806223  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:34.806229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:34.806293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:34.839377  142150 cri.go:89] found id: ""
	I1212 01:07:34.839408  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.839420  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:34.839429  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:34.839486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:34.877512  142150 cri.go:89] found id: ""
	I1212 01:07:34.877541  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.877549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:34.877558  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:34.877570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:34.914966  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:34.914994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:34.964993  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:34.965033  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:34.979644  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:34.979677  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:35.050842  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:35.050868  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:35.050893  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:31.843547  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.843911  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:36.343719  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.595369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:35.600094  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:37.634362  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.647476  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:37.647542  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:37.681730  142150 cri.go:89] found id: ""
	I1212 01:07:37.681760  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.681768  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:37.681775  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:37.681827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:37.716818  142150 cri.go:89] found id: ""
	I1212 01:07:37.716845  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.716858  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:37.716864  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:37.716913  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:37.753005  142150 cri.go:89] found id: ""
	I1212 01:07:37.753034  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.753042  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:37.753048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:37.753104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:37.789850  142150 cri.go:89] found id: ""
	I1212 01:07:37.789888  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.789900  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:37.789909  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:37.789971  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:37.826418  142150 cri.go:89] found id: ""
	I1212 01:07:37.826455  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.826466  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:37.826475  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:37.826539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:37.862108  142150 cri.go:89] found id: ""
	I1212 01:07:37.862134  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.862143  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:37.862149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:37.862202  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:37.897622  142150 cri.go:89] found id: ""
	I1212 01:07:37.897660  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.897673  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:37.897681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:37.897743  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:37.935027  142150 cri.go:89] found id: ""
	I1212 01:07:37.935055  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.935063  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:37.935072  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:37.935088  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:37.949860  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:37.949890  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:38.019692  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:38.019721  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:38.019740  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:38.100964  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:38.100994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:38.144480  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:38.144514  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
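Each "Gathering logs for ..." cycle above maps a log source to a shell command run on the node. The sketch below reproduces that loop locally for illustration only; the command strings are taken verbatim from the log, and in minikube they are executed over SSH rather than on the local host.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Printf("Gathering logs for %s ...\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// Mirrors the "failed describe nodes" warning above: with no
			// apiserver running, kubectl exits non-zero and the error plus
			// stderr are recorded instead of node details.
			fmt.Printf("failed %s: %v\n%s\n", s.name, err, out)
			continue
		}
		fmt.Println(string(out))
	}
}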
	I1212 01:07:38.844539  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.844997  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:38.096180  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.699192  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.712311  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:40.712398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:40.748454  142150 cri.go:89] found id: ""
	I1212 01:07:40.748482  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.748490  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:40.748496  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:40.748545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:40.785262  142150 cri.go:89] found id: ""
	I1212 01:07:40.785292  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.785305  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:40.785312  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:40.785376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:40.821587  142150 cri.go:89] found id: ""
	I1212 01:07:40.821624  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.821636  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:40.821644  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:40.821713  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:40.882891  142150 cri.go:89] found id: ""
	I1212 01:07:40.882918  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.882926  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:40.882935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:40.882987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:40.923372  142150 cri.go:89] found id: ""
	I1212 01:07:40.923403  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.923412  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:40.923419  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:40.923485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:40.962753  142150 cri.go:89] found id: ""
	I1212 01:07:40.962781  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.962789  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:40.962795  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:40.962851  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:40.996697  142150 cri.go:89] found id: ""
	I1212 01:07:40.996731  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.996744  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:40.996751  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:40.996812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:41.031805  142150 cri.go:89] found id: ""
	I1212 01:07:41.031842  142150 logs.go:282] 0 containers: []
	W1212 01:07:41.031855  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:41.031866  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:41.031884  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:41.108288  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:41.108310  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:41.108333  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:41.190075  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:41.190115  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:41.235886  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:41.235927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:41.288515  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:41.288554  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:43.803694  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.817859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:43.817919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:43.864193  142150 cri.go:89] found id: ""
	I1212 01:07:43.864221  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.864228  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:43.864234  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:43.864288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:43.902324  142150 cri.go:89] found id: ""
	I1212 01:07:43.902359  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.902371  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:43.902379  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:43.902443  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:43.940847  142150 cri.go:89] found id: ""
	I1212 01:07:43.940880  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.940890  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:43.940896  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:43.940947  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:43.979270  142150 cri.go:89] found id: ""
	I1212 01:07:43.979302  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.979314  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:43.979322  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:43.979398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:44.024819  142150 cri.go:89] found id: ""
	I1212 01:07:44.024851  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.024863  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:44.024872  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:44.024941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:44.062199  142150 cri.go:89] found id: ""
	I1212 01:07:44.062225  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.062234  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:44.062242  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:44.062306  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:44.097158  142150 cri.go:89] found id: ""
	I1212 01:07:44.097181  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.097188  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:44.097194  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:44.097240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:44.132067  142150 cri.go:89] found id: ""
	I1212 01:07:44.132105  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.132120  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:44.132132  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:44.132148  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:44.179552  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:44.179589  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:44.238243  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:44.238299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:44.255451  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:44.255493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:44.331758  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:44.331784  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:44.331797  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:43.343026  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.343118  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:42.595856  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.096338  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:46.916033  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.929686  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:46.929761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:46.966328  142150 cri.go:89] found id: ""
	I1212 01:07:46.966357  142150 logs.go:282] 0 containers: []
	W1212 01:07:46.966365  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:46.966371  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:46.966423  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:47.002014  142150 cri.go:89] found id: ""
	I1212 01:07:47.002059  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.002074  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:47.002082  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:47.002148  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:47.038127  142150 cri.go:89] found id: ""
	I1212 01:07:47.038158  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.038166  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:47.038172  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:47.038222  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:47.071654  142150 cri.go:89] found id: ""
	I1212 01:07:47.071684  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.071696  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:47.071704  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:47.071774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:47.105489  142150 cri.go:89] found id: ""
	I1212 01:07:47.105515  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.105524  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:47.105530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:47.105577  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.143005  142150 cri.go:89] found id: ""
	I1212 01:07:47.143042  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.143051  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:47.143058  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:47.143114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:47.176715  142150 cri.go:89] found id: ""
	I1212 01:07:47.176746  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.176756  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:47.176764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:47.176827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:47.211770  142150 cri.go:89] found id: ""
	I1212 01:07:47.211806  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.211817  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:47.211831  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:47.211850  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:47.312766  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:47.312795  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:47.312811  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:47.402444  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:47.402493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:47.441071  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:47.441109  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:47.494465  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:47.494507  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.009996  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.023764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:50.023832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:50.060392  142150 cri.go:89] found id: ""
	I1212 01:07:50.060424  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.060433  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:50.060440  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:50.060497  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:50.094874  142150 cri.go:89] found id: ""
	I1212 01:07:50.094904  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.094914  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:50.094923  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:50.094987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:50.128957  142150 cri.go:89] found id: ""
	I1212 01:07:50.128986  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.128996  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:50.129005  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:50.129067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:50.164794  142150 cri.go:89] found id: ""
	I1212 01:07:50.164819  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.164828  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:50.164835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:50.164890  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:50.201295  142150 cri.go:89] found id: ""
	I1212 01:07:50.201330  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.201342  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:50.201350  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:50.201415  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.343485  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:48.337317  141884 pod_ready.go:82] duration metric: took 4m0.000178627s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:48.337358  141884 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:48.337386  141884 pod_ready.go:39] duration metric: took 4m14.601527023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:48.337421  141884 kubeadm.go:597] duration metric: took 4m22.883520304s to restartPrimaryControlPlane
	W1212 01:07:48.337486  141884 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:48.337526  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:47.595092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:50.096774  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.514069  141469 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312952103s)
	I1212 01:07:54.514153  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:54.543613  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:54.555514  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:54.569001  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:54.569024  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:54.569082  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:54.583472  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:54.583553  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:54.598721  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:54.614369  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:54.614451  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:54.625630  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.643317  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:54.643398  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.652870  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:54.662703  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:54.662774  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
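The grep/rm sequence above is a stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so the subsequent kubeadm init can regenerate it. A simplified local sketch of that check follows; minikube runs these steps over SSH with sudo, and this version assumes permission to read and delete the files.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: remove it (a no-op if it does not
			// exist), matching the "may not be in ... - will remove" lines above.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			_ = os.Remove(conf)
			continue
		}
		fmt.Printf("keeping %s\n", conf)
	}
}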
	I1212 01:07:54.672601  141469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:54.722949  141469 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:07:54.723064  141469 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:54.845332  141469 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:54.845476  141469 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:54.845623  141469 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:54.855468  141469 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:50.236158  142150 cri.go:89] found id: ""
	I1212 01:07:50.236200  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.236212  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:50.236221  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:50.236271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:50.270232  142150 cri.go:89] found id: ""
	I1212 01:07:50.270268  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.270280  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:50.270288  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:50.270356  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:50.303222  142150 cri.go:89] found id: ""
	I1212 01:07:50.303247  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.303258  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:50.303270  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:50.303288  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.316845  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:50.316874  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:50.384455  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:50.384483  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:50.384500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:50.462863  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:50.462921  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:50.503464  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:50.503495  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:53.063953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.079946  142150 kubeadm.go:597] duration metric: took 4m3.966538012s to restartPrimaryControlPlane
	W1212 01:07:53.080031  142150 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:53.080064  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
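Once restartPrimaryControlPlane times out, both clusters above fall back to a forced kubeadm reset run with the per-version binaries directory prepended to PATH. The sketch below only shows how such a command can be assembled and executed; the binary path and CRI socket are copied from the log, it is not minikube's implementation, and running it for real is destructive.

package main

import (
	"fmt"
	"os/exec"
)

// kubeadmReset runs "kubeadm reset --force" using the binaries staged for the
// given Kubernetes version, as seen in the log lines above.
func kubeadmReset(k8sVersion string) error {
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force`,
		k8sVersion)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubeadm reset: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := kubeadmReset("v1.20.0"); err != nil {
		fmt.Println(err)
	}
}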
	I1212 01:07:54.857558  141469 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:54.857689  141469 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:54.857774  141469 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:54.857890  141469 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:54.857960  141469 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:54.858038  141469 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:54.858109  141469 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:54.858214  141469 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:54.858296  141469 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:54.858396  141469 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:54.858503  141469 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:54.858557  141469 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:54.858643  141469 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:55.129859  141469 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:55.274235  141469 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:07:55.401999  141469 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:56.015091  141469 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:56.123268  141469 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:56.123820  141469 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:56.126469  141469 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:52.595027  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:57.096606  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:58.255454  142150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.175361092s)
	I1212 01:07:58.255545  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:58.270555  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:58.281367  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:58.291555  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:58.291580  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:58.291652  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:58.301408  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:58.301473  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:58.314324  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:58.326559  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:58.326628  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:58.338454  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.348752  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:58.348815  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.361968  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:58.374545  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:58.374614  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
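	The four grep/rm pairs above are the stale-kubeconfig check: each leftover kubeconfig is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so the following `kubeadm init` can regenerate it. A minimal Go sketch of that logic, using plain local exec.Command in place of minikube's SSH runner (the helper name is illustrative, not minikube's kubeadm.go):

```go
package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleKubeconfigs mirrors the log above: each kubeconfig left over from a
// previous run is kept only if it already points at the expected control-plane
// endpoint; otherwise it is removed so kubeadm can regenerate it.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
```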
	I1212 01:07:58.387280  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:58.474893  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:07:58.475043  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:58.647222  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:58.647400  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:58.647566  142150 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:58.839198  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:56.128185  141469 out.go:235]   - Booting up control plane ...
	I1212 01:07:56.128343  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:56.128478  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:56.128577  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:56.149476  141469 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:56.156042  141469 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:56.156129  141469 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:56.292423  141469 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:07:56.292567  141469 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:07:56.794594  141469 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.027526ms
	I1212 01:07:56.794711  141469 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:07:58.841061  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:58.841173  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:58.841297  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:58.841411  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:58.841491  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:58.841575  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:58.841650  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:58.841771  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:58.842200  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:58.842503  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:58.842993  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:58.843207  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:58.843355  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:58.919303  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:59.206038  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:59.318620  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:59.693734  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:59.709562  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:59.710774  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:59.710846  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:59.877625  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:59.879576  142150 out.go:235]   - Booting up control plane ...
	I1212 01:07:59.879733  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:59.892655  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:59.894329  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:59.897694  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:59.898269  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:07:59.594764  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:01.595663  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:02.299386  141469 kubeadm.go:310] [api-check] The API server is healthy after 5.503154599s
	I1212 01:08:02.311549  141469 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:02.326944  141469 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:02.354402  141469 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:02.354661  141469 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-607268 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:02.368168  141469 kubeadm.go:310] [bootstrap-token] Using token: 0eo07f.wy46ulxfywwd0uy8
	I1212 01:08:02.369433  141469 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:02.369569  141469 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:02.381945  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:02.407880  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:02.419211  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:02.426470  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:02.437339  141469 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:02.708518  141469 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:03.143189  141469 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:03.704395  141469 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:03.705460  141469 kubeadm.go:310] 
	I1212 01:08:03.705557  141469 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:03.705576  141469 kubeadm.go:310] 
	I1212 01:08:03.705646  141469 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:03.705650  141469 kubeadm.go:310] 
	I1212 01:08:03.705672  141469 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:03.705724  141469 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:03.705768  141469 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:03.705800  141469 kubeadm.go:310] 
	I1212 01:08:03.705906  141469 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:03.705918  141469 kubeadm.go:310] 
	I1212 01:08:03.705976  141469 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:03.705987  141469 kubeadm.go:310] 
	I1212 01:08:03.706073  141469 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:03.706191  141469 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:03.706286  141469 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:03.706307  141469 kubeadm.go:310] 
	I1212 01:08:03.706438  141469 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:03.706549  141469 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:03.706556  141469 kubeadm.go:310] 
	I1212 01:08:03.706670  141469 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.706833  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:03.706864  141469 kubeadm.go:310] 	--control-plane 
	I1212 01:08:03.706869  141469 kubeadm.go:310] 
	I1212 01:08:03.706951  141469 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:03.706963  141469 kubeadm.go:310] 
	I1212 01:08:03.707035  141469 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.707134  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:03.708092  141469 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
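	The --discovery-token-ca-cert-hash printed in the join commands above is not random: it is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch of that derivation, reading ca.crt from the certificateDir shown earlier in the log (a sketch of the technique, not kubeadm's own code):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes the value kubeadm prints after
// --discovery-token-ca-cert-hash: sha256 over the CA cert's
// DER-encoded SubjectPublicKeyInfo, hex-encoded.
func caCertHash(caCertPEM []byte) (string, error) {
	block, _ := pem.Decode(caCertPEM)
	if block == nil {
		return "", fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // certificateDir from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	hash, err := caCertHash(pemBytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(hash)
}
```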
	I1212 01:08:03.708135  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:08:03.708146  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:03.709765  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:03.711315  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:03.724767  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
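	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration, but the log does not show its contents. The sketch below only illustrates what a typical bridge + portmap chain looks like; the plugin options and the 10.244.0.0/16 subnet are assumptions, not minikube's exact template:

```go
package main

import (
	"log"
	"os"
)

// An illustrative bridge CNI conflist. Values below are common bridge-CNI
// defaults, not a verbatim copy of the 496-byte file referenced in the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Written locally here; minikube copies the rendered template over SSH.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```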
	I1212 01:08:03.749770  141469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:03.749830  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:03.749896  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-607268 minikube.k8s.io/updated_at=2024_12_12T01_08_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=embed-certs-607268 minikube.k8s.io/primary=true
	I1212 01:08:03.973050  141469 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:03.973436  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.094838  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:06.095216  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:04.473952  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.974222  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.473799  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.974261  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.473492  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.974288  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.474064  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.974218  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:08.081567  141469 kubeadm.go:1113] duration metric: took 4.331794716s to wait for elevateKubeSystemPrivileges
	I1212 01:08:08.081603  141469 kubeadm.go:394] duration metric: took 5m2.502707851s to StartCluster
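	The burst of `kubectl get sa default` runs above (roughly every 500ms until 01:08:08) is a poll for the default service account after the minikube-rbac clusterrolebinding is created, which is what the elevateKubeSystemPrivileges wait measures. A minimal sketch of that wait, run locally instead of over SSH (helper name and timeout are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// succeeds, mirroring the ~500ms retry cadence visible in the log above.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```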
	I1212 01:08:08.081629  141469 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.081722  141469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:08.083443  141469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.083783  141469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:08.083894  141469 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:08.084015  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:08.084027  141469 addons.go:69] Setting metrics-server=true in profile "embed-certs-607268"
	I1212 01:08:08.084045  141469 addons.go:234] Setting addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:08.084014  141469 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-607268"
	I1212 01:08:08.084054  141469 addons.go:69] Setting default-storageclass=true in profile "embed-certs-607268"
	I1212 01:08:08.084083  141469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-607268"
	I1212 01:08:08.084085  141469 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-607268"
	W1212 01:08:08.084130  141469 addons.go:243] addon storage-provisioner should already be in state true
	W1212 01:08:08.084057  141469 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084618  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084658  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084671  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084684  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084617  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084756  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.085205  141469 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:08.086529  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:08.104090  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I1212 01:08:08.104115  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I1212 01:08:08.104092  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1212 01:08:08.104662  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104701  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104785  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105323  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105329  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105337  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105382  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105696  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105718  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105700  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.106132  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106163  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.106364  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.106599  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106626  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.110390  141469 addons.go:234] Setting addon default-storageclass=true in "embed-certs-607268"
	W1212 01:08:08.110415  141469 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:08.110447  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.110811  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.110844  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.124380  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I1212 01:08:08.124888  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.125447  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.125472  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.125764  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.125966  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.126885  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1212 01:08:08.127417  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.127718  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I1212 01:08:08.127911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.127990  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128002  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.128161  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.128338  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.128541  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.128612  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128626  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.129037  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.129640  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.129678  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.129905  141469 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:08.131337  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:08.131367  141469 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:08.131387  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.131816  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.133335  141469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:08.134372  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.134696  141469 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.134714  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:08.134734  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.134851  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.134868  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.135026  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.135247  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.135405  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.135549  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.137253  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137705  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.137725  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137810  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.137911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.138065  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.138162  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.146888  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I1212 01:08:08.147344  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.147919  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.147937  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.148241  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.148418  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.150018  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.150282  141469 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.150299  141469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:08.150318  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.152881  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153311  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.153327  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.153344  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153509  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.153634  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.153816  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.301991  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:08.323794  141469 node_ready.go:35] waiting up to 6m0s for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338205  141469 node_ready.go:49] node "embed-certs-607268" has status "Ready":"True"
	I1212 01:08:08.338241  141469 node_ready.go:38] duration metric: took 14.401624ms for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338255  141469 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:08.355801  141469 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:08.406624  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:08.406648  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:08.409497  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.456893  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:08.456917  141469 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:08.554996  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.558767  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.558793  141469 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:08.614574  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.702483  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702513  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.702818  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.702883  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.702894  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.702904  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702912  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.703142  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.703186  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.703163  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.714426  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.714450  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.714840  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.714857  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.821732  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266688284s)
	I1212 01:08:09.821807  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.821824  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822160  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822185  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.822211  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.822225  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822487  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.822518  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822535  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842157  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.227536232s)
	I1212 01:08:09.842222  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842237  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.842627  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.842663  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.842672  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842679  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842687  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.843002  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.843013  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.843028  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.843046  141469 addons.go:475] Verifying addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:09.844532  141469 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:08.098516  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:10.596197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:09.845721  141469 addons.go:510] duration metric: took 1.761839241s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:10.400164  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:12.862616  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:14.362448  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.362473  141469 pod_ready.go:82] duration metric: took 6.006632075s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.362486  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868198  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.868220  141469 pod_ready.go:82] duration metric: took 505.72656ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868231  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872557  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.872582  141469 pod_ready.go:82] duration metric: took 4.343797ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872599  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876837  141469 pod_ready.go:93] pod "kube-proxy-6hw4b" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.876858  141469 pod_ready.go:82] duration metric: took 4.251529ms for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876867  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881467  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.881487  141469 pod_ready.go:82] duration metric: took 4.612567ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881496  141469 pod_ready.go:39] duration metric: took 6.543228562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
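	The pod_ready lines above boil down to polling each system-critical pod until its PodReady condition reports True. A client-go sketch of that check, using the kubeconfig path recorded earlier in the log and the etcd pod name from the log; the code itself is illustrative, not minikube's pod_ready.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True, which is the
// status the "Ready":"True"/"False" lines in the log are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll the etcd static pod every 2s for up to 6m, roughly the wait
	// performed per system-critical pod in the log.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-607268", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```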
	I1212 01:08:14.881516  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:14.881571  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:14.898899  141469 api_server.go:72] duration metric: took 6.815070313s to wait for apiserver process to appear ...
	I1212 01:08:14.898942  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:14.898963  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:08:14.904555  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:08:14.905738  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:14.905762  141469 api_server.go:131] duration metric: took 6.812513ms to wait for apiserver health ...
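	The healthz probe above is an HTTPS GET against the apiserver that expects a 200 response with body "ok". A self-contained sketch of the same check; the real client presents the cluster CA, whereas this sketch skips TLS verification purely to stay short:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mirrors the probe in the log: GET /healthz on the apiserver and
// treat HTTP 200 with body "ok" as healthy.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.50.151:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}
```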
	I1212 01:08:14.905771  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:14.964381  141469 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:14.964413  141469 system_pods.go:61] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:14.964418  141469 system_pods.go:61] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:14.964422  141469 system_pods.go:61] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:14.964426  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:14.964429  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:14.964432  141469 system_pods.go:61] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:14.964435  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:14.964441  141469 system_pods.go:61] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:14.964447  141469 system_pods.go:61] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:14.964460  141469 system_pods.go:74] duration metric: took 58.68072ms to wait for pod list to return data ...
	I1212 01:08:14.964476  141469 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:15.161106  141469 default_sa.go:45] found service account: "default"
	I1212 01:08:15.161137  141469 default_sa.go:55] duration metric: took 196.651344ms for default service account to be created ...
	I1212 01:08:15.161147  141469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:15.363429  141469 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:15.363457  141469 system_pods.go:89] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:15.363462  141469 system_pods.go:89] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:15.363466  141469 system_pods.go:89] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:15.363470  141469 system_pods.go:89] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:15.363473  141469 system_pods.go:89] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:15.363477  141469 system_pods.go:89] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:15.363480  141469 system_pods.go:89] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:15.363487  141469 system_pods.go:89] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:15.363492  141469 system_pods.go:89] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:15.363501  141469 system_pods.go:126] duration metric: took 202.347796ms to wait for k8s-apps to be running ...
	I1212 01:08:15.363508  141469 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:15.363553  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:15.378498  141469 system_svc.go:56] duration metric: took 14.977368ms WaitForService to wait for kubelet
	I1212 01:08:15.378527  141469 kubeadm.go:582] duration metric: took 7.294704666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:15.378545  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:15.561384  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:15.561408  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:15.561422  141469 node_conditions.go:105] duration metric: took 182.869791ms to run NodePressure ...
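	The NodePressure verification above reads node capacity (the ephemeral-storage and cpu figures printed in the log) and confirms no pressure conditions are set. A client-go sketch of an equivalent check (illustrative, not minikube's node_conditions.go):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The two capacity figures printed in the log.
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		// NodePressure verification: memory and disk pressure must not be True.
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  node reports pressure: %s\n", c.Type)
			}
		}
	}
}
```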
	I1212 01:08:15.561435  141469 start.go:241] waiting for startup goroutines ...
	I1212 01:08:15.561442  141469 start.go:246] waiting for cluster config update ...
	I1212 01:08:15.561453  141469 start.go:255] writing updated cluster config ...
	I1212 01:08:15.561693  141469 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:15.615106  141469 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:15.617073  141469 out.go:177] * Done! kubectl is now configured to use "embed-certs-607268" cluster and "default" namespace by default
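	The "(minor skew: 1)" note above compares the kubectl client's minor version (1.32) against the cluster's (1.31); a skew of at most one minor version is within kubectl's supported range, so only an informational line is printed. A tiny sketch of that comparison (helper name made up for illustration, not minikube's start.go):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, as reported in the log for
// kubectl 1.32.0 vs cluster 1.31.2.
func minorSkew(a, b string) int {
	minor := func(v string) int {
		parts := strings.Split(v, ".")
		if len(parts) < 2 {
			return 0
		}
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	d := minor(a) - minor(b)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.32.0", "1.31.2")) // prints 1
}
```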
	I1212 01:08:14.771660  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.434092304s)
	I1212 01:08:14.771750  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:14.802721  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:08:14.813349  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:08:14.826608  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:08:14.826637  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:08:14.826693  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:08:14.842985  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:08:14.843060  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:08:14.855326  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:08:14.872371  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:08:14.872449  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:08:14.883793  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.894245  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:08:14.894306  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.906163  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:08:14.915821  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:08:14.915867  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:08:14.926019  141884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:08:15.092424  141884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:13.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:15.096259  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:17.596953  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:20.095957  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:22.096970  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:23.562216  141884 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:08:23.562302  141884 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:08:23.562463  141884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:08:23.562655  141884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:08:23.562786  141884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:08:23.562870  141884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:08:23.564412  141884 out.go:235]   - Generating certificates and keys ...
	I1212 01:08:23.564519  141884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:08:23.564605  141884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:08:23.564718  141884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:08:23.564802  141884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:08:23.564879  141884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:08:23.564925  141884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:08:23.565011  141884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:08:23.565110  141884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:08:23.565230  141884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:08:23.565352  141884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:08:23.565393  141884 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:08:23.565439  141884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:08:23.565485  141884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:08:23.565537  141884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:08:23.565582  141884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:08:23.565636  141884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:08:23.565700  141884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:08:23.565786  141884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:08:23.565885  141884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:08:23.567104  141884 out.go:235]   - Booting up control plane ...
	I1212 01:08:23.567195  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:08:23.567267  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:08:23.567353  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:08:23.567472  141884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:08:23.567579  141884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:08:23.567662  141884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:08:23.567812  141884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:08:23.567953  141884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:08:23.568010  141884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001996966s
	I1212 01:08:23.568071  141884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:08:23.568125  141884 kubeadm.go:310] [api-check] The API server is healthy after 5.001946459s
	I1212 01:08:23.568266  141884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:23.568424  141884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:23.568510  141884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:23.568702  141884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-076578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:23.568789  141884 kubeadm.go:310] [bootstrap-token] Using token: 472xql.x3zqihc9l5oj308m
	I1212 01:08:23.570095  141884 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:23.570226  141884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:23.570353  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:23.570550  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:23.570719  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:23.570880  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:23.571006  141884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:23.571186  141884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:23.571245  141884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:23.571322  141884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:23.571333  141884 kubeadm.go:310] 
	I1212 01:08:23.571411  141884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:23.571421  141884 kubeadm.go:310] 
	I1212 01:08:23.571530  141884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:23.571551  141884 kubeadm.go:310] 
	I1212 01:08:23.571609  141884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:23.571711  141884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:23.571795  141884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:23.571808  141884 kubeadm.go:310] 
	I1212 01:08:23.571892  141884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:23.571907  141884 kubeadm.go:310] 
	I1212 01:08:23.571985  141884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:23.571992  141884 kubeadm.go:310] 
	I1212 01:08:23.572069  141884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:23.572184  141884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:23.572276  141884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:23.572286  141884 kubeadm.go:310] 
	I1212 01:08:23.572413  141884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:23.572516  141884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:23.572525  141884 kubeadm.go:310] 
	I1212 01:08:23.572656  141884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.572805  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:23.572847  141884 kubeadm.go:310] 	--control-plane 
	I1212 01:08:23.572856  141884 kubeadm.go:310] 
	I1212 01:08:23.572973  141884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:23.572991  141884 kubeadm.go:310] 
	I1212 01:08:23.573107  141884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.573248  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:23.573273  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:08:23.573283  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:23.574736  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:23.575866  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:23.590133  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
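Note: the log above shows minikube copying a 496-byte bridge CNI conflist to /etc/cni/net.d, but the payload itself is not reproduced in the log. As a rough illustration only, a bridge-plus-host-local conflist of this general shape is sketched below; the file name matches the log, while the bridge name, subnet, and plugin list are assumptions, not values observed in this run.

// Illustrative only: writes a generic bridge CNI conflist similar in shape to
// the one minikube scp's to /etc/cni/net.d. The bridge name, subnet, and
// plugin list are assumptions for the sketch, not taken from this test run.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}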
	I1212 01:08:23.613644  141884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:23.613737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:23.613759  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-076578 minikube.k8s.io/updated_at=2024_12_12T01_08_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=default-k8s-diff-port-076578 minikube.k8s.io/primary=true
	I1212 01:08:23.642646  141884 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:23.831478  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.331749  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.832158  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.331630  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.831737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:26.331787  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.597126  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:27.095607  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:26.831860  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.331748  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.448891  141884 kubeadm.go:1113] duration metric: took 3.835231667s to wait for elevateKubeSystemPrivileges
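Note: the repeated "kubectl get sa default" runs above are a poll loop; minikube retries until the default ServiceAccount is visible (i.e. kube-system privileges have been elevated) and then records the elapsed time. A minimal sketch of that pattern is below; it is not minikube's implementation, and the kubeconfig path, interval, and timeout are assumptions for illustration.

// Minimal sketch of the retry loop seen in the log: keep running
// `kubectl get sa default` until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; privileges are in place
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}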
	I1212 01:08:27.448930  141884 kubeadm.go:394] duration metric: took 5m2.053707834s to StartCluster
	I1212 01:08:27.448957  141884 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.449060  141884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:27.450918  141884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.451183  141884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:27.451263  141884 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:27.451385  141884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451409  141884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451417  141884 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:08:27.451413  141884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451449  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:27.451454  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451465  141884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-076578"
	I1212 01:08:27.451423  141884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451570  141884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451586  141884 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:27.451648  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451876  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451905  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451927  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.451942  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452055  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.452096  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452939  141884 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:27.454521  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:27.467512  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1212 01:08:27.467541  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I1212 01:08:27.467581  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1212 01:08:27.468032  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468069  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468039  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468580  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468592  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468604  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468609  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468620  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468635  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468968  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.469191  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.469562  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469579  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469613  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.469623  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.472898  141884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.472925  141884 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:27.472956  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.473340  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.473389  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.485014  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I1212 01:08:27.485438  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.486058  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.486077  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.486629  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.486832  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.487060  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1212 01:08:27.487779  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.488503  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.488527  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.488910  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.489132  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.489304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.489892  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1212 01:08:27.490599  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.490758  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.491213  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.491236  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.491385  141884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:27.491606  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.492230  141884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:27.492375  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.492420  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.493368  141884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.493382  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:27.493397  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.493462  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:27.493468  141884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:27.493481  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.496807  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497273  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.497304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497474  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.497647  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.497691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497771  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.497922  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.498178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.498190  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.498288  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.498467  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.498634  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.498779  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.512025  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1212 01:08:27.512490  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.513168  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.513187  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.513474  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.513664  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.514930  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.515106  141884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.515119  141884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:27.515131  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.520051  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520084  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.520183  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520419  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.520574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.520737  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.520828  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.692448  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:27.712214  141884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724269  141884 node_ready.go:49] node "default-k8s-diff-port-076578" has status "Ready":"True"
	I1212 01:08:27.724301  141884 node_ready.go:38] duration metric: took 12.044784ms for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724313  141884 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:27.729135  141884 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:27.768566  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:27.768596  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:27.782958  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.797167  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:27.797190  141884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:27.828960  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:27.828983  141884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:27.871251  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.883614  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:28.198044  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198090  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198457  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198510  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198522  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.198532  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198544  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198817  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198815  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198844  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.277379  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.277405  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.277719  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.277741  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955418  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084128053s)
	I1212 01:08:28.955472  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955561  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071904294s)
	I1212 01:08:28.955624  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955646  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955856  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.955874  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955881  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955888  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.957731  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957740  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957748  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957761  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957802  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957814  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957823  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.957836  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.958072  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.958090  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.958100  141884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-076578"
	I1212 01:08:28.959879  141884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:28.961027  141884 addons.go:510] duration metric: took 1.509771178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:29.241061  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:29.241090  141884 pod_ready.go:82] duration metric: took 1.511925292s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:29.241106  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:31.247610  141884 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:29.095906  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:31.593942  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:33.246910  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.246933  141884 pod_ready.go:82] duration metric: took 4.005818542s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.246944  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753325  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.753350  141884 pod_ready.go:82] duration metric: took 506.39921ms for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753360  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758733  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.758759  141884 pod_ready.go:82] duration metric: took 5.391762ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758769  141884 pod_ready.go:39] duration metric: took 6.034446537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
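Note: the pod_ready waits above boil down to checking each control-plane pod for a Ready condition with status True. A rough client-go sketch of that check follows; it is not minikube's code, and the kubeconfig path and pod name are placeholders taken from the log for illustration.

// Rough client-go sketch of a "pod Ready" check like the pod_ready waits in
// the log. Not minikube's implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(client, "kube-system", "etcd-default-k8s-diff-port-076578")
	fmt.Println(ready, err)
}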
	I1212 01:08:33.758789  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:33.758854  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:33.774952  141884 api_server.go:72] duration metric: took 6.323732468s to wait for apiserver process to appear ...
	I1212 01:08:33.774976  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:33.774995  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:08:33.780463  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:08:33.781364  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:33.781387  141884 api_server.go:131] duration metric: took 6.404187ms to wait for apiserver health ...
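Note: the healthz probe above is an HTTPS GET against the apiserver's /healthz endpoint, expecting a 200 response with body "ok". A bare-bones version of that check is sketched below; TLS verification is skipped here only to keep the sketch self-contained, and how minikube actually configures TLS for this probe is not shown in the log.

// Bare-bones apiserver /healthz probe like the one logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.174:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}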
	I1212 01:08:33.781396  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:33.786570  141884 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:33.786591  141884 system_pods.go:61] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.786596  141884 system_pods.go:61] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.786599  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.786603  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.786606  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.786610  141884 system_pods.go:61] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.786615  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.786623  141884 system_pods.go:61] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.786630  141884 system_pods.go:61] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.786643  141884 system_pods.go:74] duration metric: took 5.239236ms to wait for pod list to return data ...
	I1212 01:08:33.786655  141884 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:33.789776  141884 default_sa.go:45] found service account: "default"
	I1212 01:08:33.789794  141884 default_sa.go:55] duration metric: took 3.13371ms for default service account to be created ...
	I1212 01:08:33.789801  141884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:33.794118  141884 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:33.794139  141884 system_pods.go:89] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.794145  141884 system_pods.go:89] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.794149  141884 system_pods.go:89] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.794154  141884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.794157  141884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.794161  141884 system_pods.go:89] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.794165  141884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.794170  141884 system_pods.go:89] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.794177  141884 system_pods.go:89] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.794185  141884 system_pods.go:126] duration metric: took 4.378791ms to wait for k8s-apps to be running ...
	I1212 01:08:33.794194  141884 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:33.794233  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:33.809257  141884 system_svc.go:56] duration metric: took 15.051528ms WaitForService to wait for kubelet
	I1212 01:08:33.809290  141884 kubeadm.go:582] duration metric: took 6.358073584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:33.809323  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:33.813154  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:33.813174  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:33.813183  141884 node_conditions.go:105] duration metric: took 3.85493ms to run NodePressure ...
	I1212 01:08:33.813194  141884 start.go:241] waiting for startup goroutines ...
	I1212 01:08:33.813200  141884 start.go:246] waiting for cluster config update ...
	I1212 01:08:33.813210  141884 start.go:255] writing updated cluster config ...
	I1212 01:08:33.813474  141884 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:33.862511  141884 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:33.864367  141884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-076578" cluster and "default" namespace by default
	I1212 01:08:33.594621  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:34.589133  141411 pod_ready.go:82] duration metric: took 4m0.000384717s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	E1212 01:08:34.589166  141411 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:08:34.589184  141411 pod_ready.go:39] duration metric: took 4m8.190648334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:34.589214  141411 kubeadm.go:597] duration metric: took 4m15.984656847s to restartPrimaryControlPlane
	W1212 01:08:34.589299  141411 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:08:34.589327  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:08:39.900234  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:08:39.900966  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:39.901216  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:44.901739  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:44.901921  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:54.902652  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:54.902877  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:00.919650  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.330292422s)
	I1212 01:09:00.919762  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:00.956649  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:09:00.976311  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:00.999339  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:00.999364  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:00.999413  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:01.013048  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:01.013112  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:01.027407  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:01.036801  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:01.036854  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:01.046865  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.056325  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:01.056390  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.066574  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:01.078080  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:01.078130  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
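Note: the cleanup above follows a simple rule before re-running kubeadm init: for each kubeconfig under /etc/kubernetes, if it is missing or does not reference https://control-plane.minikube.internal:8443, remove it so the next init can write a fresh copy. A compact sketch of that logic is below; the paths and endpoint are taken from the grep/rm commands in the log, the rest is illustrative.

// Compact sketch of the stale-kubeconfig cleanup logged above: drop any
// /etc/kubernetes/*.conf that is missing or does not point at the expected
// control-plane endpoint so kubeadm init can regenerate it.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing somewhere else: remove so kubeadm rewrites it.
			_ = os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}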
	I1212 01:09:01.088810  141411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:01.249481  141411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:09.318633  141411 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:09:09.318694  141411 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:09:09.318789  141411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:09:09.318924  141411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:09:09.319074  141411 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:09:09.319185  141411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:09:09.320615  141411 out.go:235]   - Generating certificates and keys ...
	I1212 01:09:09.320710  141411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:09:09.320803  141411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:09:09.320886  141411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:09:09.320957  141411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:09:09.321061  141411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:09:09.321118  141411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:09:09.321188  141411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:09:09.321249  141411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:09:09.321334  141411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:09:09.321442  141411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:09:09.321516  141411 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:09:09.321611  141411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:09:09.321698  141411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:09:09.321775  141411 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:09:09.321849  141411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:09:09.321924  141411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:09:09.321973  141411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:09:09.322099  141411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:09:09.322204  141411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:09:09.323661  141411 out.go:235]   - Booting up control plane ...
	I1212 01:09:09.323780  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:09:09.323864  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:09:09.323950  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:09:09.324082  141411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:09:09.324181  141411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:09:09.324255  141411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:09:09.324431  141411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:09:09.324571  141411 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:09:09.324647  141411 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.39943ms
	I1212 01:09:09.324730  141411 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:09:09.324780  141411 kubeadm.go:310] [api-check] The API server is healthy after 5.001520724s
	I1212 01:09:09.324876  141411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:09:09.325036  141411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:09:09.325136  141411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:09:09.325337  141411 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-242725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:09:09.325401  141411 kubeadm.go:310] [bootstrap-token] Using token: k8uf20.0v0t2d7mhtmwxurz
	I1212 01:09:09.326715  141411 out.go:235]   - Configuring RBAC rules ...
	I1212 01:09:09.326840  141411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:09:09.326938  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:09:09.327149  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:09:09.327329  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:09:09.327498  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:09:09.327643  141411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:09:09.327787  141411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:09:09.327852  141411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:09:09.327926  141411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:09:09.327935  141411 kubeadm.go:310] 
	I1212 01:09:09.328027  141411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:09:09.328036  141411 kubeadm.go:310] 
	I1212 01:09:09.328138  141411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:09:09.328148  141411 kubeadm.go:310] 
	I1212 01:09:09.328183  141411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:09:09.328253  141411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:09:09.328302  141411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:09:09.328308  141411 kubeadm.go:310] 
	I1212 01:09:09.328396  141411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:09:09.328413  141411 kubeadm.go:310] 
	I1212 01:09:09.328478  141411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:09:09.328489  141411 kubeadm.go:310] 
	I1212 01:09:09.328554  141411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:09:09.328643  141411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:09:09.328719  141411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:09:09.328727  141411 kubeadm.go:310] 
	I1212 01:09:09.328797  141411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:09:09.328885  141411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:09:09.328894  141411 kubeadm.go:310] 
	I1212 01:09:09.328997  141411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329096  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:09:09.329120  141411 kubeadm.go:310] 	--control-plane 
	I1212 01:09:09.329126  141411 kubeadm.go:310] 
	I1212 01:09:09.329201  141411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:09:09.329209  141411 kubeadm.go:310] 
	I1212 01:09:09.329276  141411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329374  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:09:09.329386  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:09:09.329393  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:09:09.330870  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:09:09.332191  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:09:09.345593  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:09:09.366177  141411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:09:09.366234  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:09.366252  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-242725 minikube.k8s.io/updated_at=2024_12_12T01_09_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=no-preload-242725 minikube.k8s.io/primary=true
	I1212 01:09:09.589709  141411 ops.go:34] apiserver oom_adj: -16
	I1212 01:09:09.589889  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.090703  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.590697  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.090698  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.590027  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.090413  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.590626  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.090322  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.590174  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.090032  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.233581  141411 kubeadm.go:1113] duration metric: took 4.867404479s to wait for elevateKubeSystemPrivileges
	I1212 01:09:14.233636  141411 kubeadm.go:394] duration metric: took 4m55.678870659s to StartCluster
	I1212 01:09:14.233674  141411 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.233790  141411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:09:14.236087  141411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.236385  141411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:09:14.236460  141411 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:09:14.236567  141411 addons.go:69] Setting storage-provisioner=true in profile "no-preload-242725"
	I1212 01:09:14.236583  141411 addons.go:69] Setting default-storageclass=true in profile "no-preload-242725"
	I1212 01:09:14.236610  141411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-242725"
	I1212 01:09:14.236611  141411 addons.go:69] Setting metrics-server=true in profile "no-preload-242725"
	I1212 01:09:14.236631  141411 addons.go:234] Setting addon metrics-server=true in "no-preload-242725"
	W1212 01:09:14.236646  141411 addons.go:243] addon metrics-server should already be in state true
	I1212 01:09:14.236682  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.236588  141411 addons.go:234] Setting addon storage-provisioner=true in "no-preload-242725"
	I1212 01:09:14.236687  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1212 01:09:14.236712  141411 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:09:14.236838  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.237093  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237141  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237185  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237101  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237227  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237235  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237863  141411 out.go:177] * Verifying Kubernetes components...
	I1212 01:09:14.239284  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:09:14.254182  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I1212 01:09:14.254405  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I1212 01:09:14.254418  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1212 01:09:14.254742  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254857  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254874  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255388  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255415  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255439  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255803  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255814  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255807  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.256218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.256360  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256396  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.256524  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256567  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.259313  141411 addons.go:234] Setting addon default-storageclass=true in "no-preload-242725"
	W1212 01:09:14.259330  141411 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:09:14.259357  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.259575  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.259621  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.273148  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I1212 01:09:14.273601  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.273909  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
	I1212 01:09:14.274174  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274200  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274282  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.274560  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.274785  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274801  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274866  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.275126  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.275280  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.276840  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.277013  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.278945  141411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:09:14.279016  141411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:09:14.903981  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:14.904298  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:14.280219  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:09:14.280239  141411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:09:14.280268  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.280440  141411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.280450  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:09:14.280464  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.281368  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I1212 01:09:14.282054  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.282652  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.282673  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.283314  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.283947  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.283990  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.284230  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284232  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284802  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.284830  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285052  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285088  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.285106  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285247  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285458  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285483  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285619  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285624  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.285761  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285880  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.323872  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I1212 01:09:14.324336  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.324884  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.324906  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.325248  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.325437  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.326991  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.327217  141411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.327237  141411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:09:14.327258  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.330291  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.330895  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.330910  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.330926  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.331062  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.331219  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.331343  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.411182  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:09:14.454298  141411 node_ready.go:35] waiting up to 6m0s for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467328  141411 node_ready.go:49] node "no-preload-242725" has status "Ready":"True"
	I1212 01:09:14.467349  141411 node_ready.go:38] duration metric: took 13.017274ms for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467359  141411 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:14.482865  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:14.557685  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.594366  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.602730  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:09:14.602760  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:09:14.666446  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:09:14.666474  141411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:09:14.746040  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.746075  141411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:09:14.799479  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.862653  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.862688  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863687  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.863706  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.863721  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.863730  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863740  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:14.863988  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.864007  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878604  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.878630  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.878903  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.878944  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878914  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.914665  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320255607s)
	I1212 01:09:15.914726  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.914741  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915158  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.915204  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915219  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:15.915236  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.915249  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915499  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915528  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.106582  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.307047373s)
	I1212 01:09:16.106635  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.106652  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107000  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107020  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107030  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.107037  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107298  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107317  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107328  141411 addons.go:475] Verifying addon metrics-server=true in "no-preload-242725"
	I1212 01:09:16.107305  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:16.108981  141411 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:09:16.110608  141411 addons.go:510] duration metric: took 1.874161814s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:09:16.498983  141411 pod_ready.go:103] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:09:16.989762  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:16.989784  141411 pod_ready.go:82] duration metric: took 2.506893862s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:16.989795  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996560  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:17.996582  141411 pod_ready.go:82] duration metric: took 1.00678165s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996593  141411 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002275  141411 pod_ready.go:93] pod "etcd-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.002294  141411 pod_ready.go:82] duration metric: took 5.694407ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002308  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006942  141411 pod_ready.go:93] pod "kube-apiserver-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.006965  141411 pod_ready.go:82] duration metric: took 4.650802ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006978  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011581  141411 pod_ready.go:93] pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.011621  141411 pod_ready.go:82] duration metric: took 4.634646ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011634  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187112  141411 pod_ready.go:93] pod "kube-proxy-5kc2s" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.187143  141411 pod_ready.go:82] duration metric: took 175.498685ms for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187156  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.586974  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.587003  141411 pod_ready.go:82] duration metric: took 399.836187ms for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.587012  141411 pod_ready.go:39] duration metric: took 4.119642837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:18.587032  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:09:18.587091  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:18.603406  141411 api_server.go:72] duration metric: took 4.366985373s to wait for apiserver process to appear ...
	I1212 01:09:18.603446  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:09:18.603473  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:09:18.609003  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:09:18.609950  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:09:18.609968  141411 api_server.go:131] duration metric: took 6.513408ms to wait for apiserver health ...
	I1212 01:09:18.609976  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:09:18.790460  141411 system_pods.go:59] 9 kube-system pods found
	I1212 01:09:18.790494  141411 system_pods.go:61] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:18.790502  141411 system_pods.go:61] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:18.790507  141411 system_pods.go:61] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:18.790510  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:18.790515  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:18.790520  141411 system_pods.go:61] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:18.790525  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:18.790534  141411 system_pods.go:61] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:18.790540  141411 system_pods.go:61] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:18.790556  141411 system_pods.go:74] duration metric: took 180.570066ms to wait for pod list to return data ...
	I1212 01:09:18.790566  141411 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:09:18.987130  141411 default_sa.go:45] found service account: "default"
	I1212 01:09:18.987172  141411 default_sa.go:55] duration metric: took 196.594497ms for default service account to be created ...
	I1212 01:09:18.987185  141411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:09:19.189233  141411 system_pods.go:86] 9 kube-system pods found
	I1212 01:09:19.189262  141411 system_pods.go:89] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:19.189267  141411 system_pods.go:89] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:19.189271  141411 system_pods.go:89] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:19.189274  141411 system_pods.go:89] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:19.189290  141411 system_pods.go:89] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:19.189294  141411 system_pods.go:89] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:19.189300  141411 system_pods.go:89] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:19.189308  141411 system_pods.go:89] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:19.189318  141411 system_pods.go:89] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:19.189331  141411 system_pods.go:126] duration metric: took 202.137957ms to wait for k8s-apps to be running ...
	I1212 01:09:19.189341  141411 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:09:19.189391  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:19.204241  141411 system_svc.go:56] duration metric: took 14.889522ms WaitForService to wait for kubelet
	I1212 01:09:19.204272  141411 kubeadm.go:582] duration metric: took 4.967858935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:09:19.204289  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:09:19.387735  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:09:19.387760  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:09:19.387768  141411 node_conditions.go:105] duration metric: took 183.47486ms to run NodePressure ...
	I1212 01:09:19.387780  141411 start.go:241] waiting for startup goroutines ...
	I1212 01:09:19.387787  141411 start.go:246] waiting for cluster config update ...
	I1212 01:09:19.387796  141411 start.go:255] writing updated cluster config ...
	I1212 01:09:19.388041  141411 ssh_runner.go:195] Run: rm -f paused
	I1212 01:09:19.437923  141411 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:09:19.439913  141411 out.go:177] * Done! kubectl is now configured to use "no-preload-242725" cluster and "default" namespace by default
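The successful start above reduces to a fixed sequence: enable the requested addons, wait for the node and system pods to be Ready, probe the apiserver healthz endpoint, then hand the kubeconfig to kubectl. A minimal sketch of the equivalent manual checks, assuming the endpoint and profile name from the log (https://192.168.61.222:8443, "no-preload-242725") and that the minikube-written kubeconfig exposes a context of the same name; these commands mirror the checks logged above rather than reproducing test code:

curl -k https://192.168.61.222:8443/healthz                                  # expect HTTP 200 with body "ok"
kubectl --context no-preload-242725 -n kube-system get pods                  # coredns, etcd, kube-*, storage-provisioner, metrics-server
kubectl --context no-preload-242725 get apiservice v1beta1.metrics.k8s.io    # the APIService created by metrics-apiservice.yaml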
	I1212 01:09:54.906484  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:54.906805  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:54.906828  142150 kubeadm.go:310] 
	I1212 01:09:54.906866  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:09:54.906908  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:09:54.906915  142150 kubeadm.go:310] 
	I1212 01:09:54.906944  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:09:54.906974  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:09:54.907087  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:09:54.907106  142150 kubeadm.go:310] 
	I1212 01:09:54.907205  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:09:54.907240  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:09:54.907271  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:09:54.907277  142150 kubeadm.go:310] 
	I1212 01:09:54.907369  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:09:54.907474  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:09:54.907499  142150 kubeadm.go:310] 
	I1212 01:09:54.907659  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:09:54.907749  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:09:54.907815  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:09:54.907920  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:09:54.907937  142150 kubeadm.go:310] 
	I1212 01:09:54.909051  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:54.909171  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:09:54.909277  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:09:54.909442  142150 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
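The v1.20.0 control plane above never comes up because the kubelet stays unreachable on 127.0.0.1:10248, and the kubeadm output already names the next steps. A minimal sketch of running them on the node (e.g. via minikube ssh against the same profile), using only the commands quoted in that output; this is troubleshooting guidance, not part of the captured test log:

sudo systemctl status kubelet                     # is the service active, and with what exit code?
sudo journalctl -xeu kubelet | tail -n 100        # why the kubelet exited or failed its health check
sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID      # once a failing container ID is known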
	
	I1212 01:09:54.909493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:09:55.377787  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:55.393139  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:55.403640  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:55.403664  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:55.403707  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:55.413315  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:55.413394  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:55.422954  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:55.432010  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:55.432073  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:55.441944  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.451991  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:55.452064  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.461584  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:55.471118  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:55.471191  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
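The cleanup pass above applies one rule per kubeconfig file: if it does not reference https://control-plane.minikube.internal:8443 (here because none of the files exist after the reset), remove it before retrying kubeadm init. A compact sketch of that check, assuming the same four files; the loop is illustrative and not how the Go code is structured:

# Mirrors the grep/rm pairs logged above; grep fails on a missing file, so the file is removed (a no-op here).
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done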
	I1212 01:09:55.480829  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:55.713359  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:11:51.592618  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:11:51.592716  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:11:51.594538  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:11:51.594601  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:11:51.594684  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:11:51.594835  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:11:51.594954  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:11:51.595052  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:11:51.597008  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:11:51.597118  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:11:51.597173  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:11:51.597241  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:11:51.597297  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:11:51.597359  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:11:51.597427  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:11:51.597508  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:11:51.597585  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:11:51.597681  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:11:51.597766  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:11:51.597804  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:11:51.597869  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:11:51.597941  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:11:51.598021  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:11:51.598119  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:11:51.598207  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:11:51.598320  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:11:51.598427  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:11:51.598485  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:11:51.598577  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:11:51.599918  142150 out.go:235]   - Booting up control plane ...
	I1212 01:11:51.600024  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:11:51.600148  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:11:51.600229  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:11:51.600341  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:11:51.600507  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:11:51.600572  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:11:51.600672  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.600878  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.600992  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601222  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601285  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601456  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601515  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601702  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601804  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.602020  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.602033  142150 kubeadm.go:310] 
	I1212 01:11:51.602093  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:11:51.602153  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:11:51.602163  142150 kubeadm.go:310] 
	I1212 01:11:51.602211  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:11:51.602274  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:11:51.602393  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:11:51.602416  142150 kubeadm.go:310] 
	I1212 01:11:51.602561  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:11:51.602618  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:11:51.602651  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:11:51.602661  142150 kubeadm.go:310] 
	I1212 01:11:51.602794  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:11:51.602919  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:11:51.602928  142150 kubeadm.go:310] 
	I1212 01:11:51.603023  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:11:51.603110  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:11:51.603176  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:11:51.603237  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:11:51.603252  142150 kubeadm.go:310] 
	I1212 01:11:51.603327  142150 kubeadm.go:394] duration metric: took 8m2.544704165s to StartCluster
	I1212 01:11:51.603376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:51.603447  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:51.648444  142150 cri.go:89] found id: ""
	I1212 01:11:51.648488  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.648501  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:11:51.648509  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:11:51.648573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:51.687312  142150 cri.go:89] found id: ""
	I1212 01:11:51.687341  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.687354  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:11:51.687362  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:11:51.687419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:51.726451  142150 cri.go:89] found id: ""
	I1212 01:11:51.726505  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.726521  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:11:51.726529  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:51.726594  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:51.763077  142150 cri.go:89] found id: ""
	I1212 01:11:51.763112  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.763125  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:11:51.763132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:51.763194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:51.801102  142150 cri.go:89] found id: ""
	I1212 01:11:51.801139  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.801152  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:51.801160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:51.801220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:51.838249  142150 cri.go:89] found id: ""
	I1212 01:11:51.838275  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.838283  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:11:51.838290  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:51.838357  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:51.874958  142150 cri.go:89] found id: ""
	I1212 01:11:51.874989  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.874997  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:51.875007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:11:51.875106  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:11:51.911408  142150 cri.go:89] found id: ""
	I1212 01:11:51.911440  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.911451  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:11:51.911465  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:51.911483  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:51.997485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:51.997516  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:11:51.997532  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:11:52.119827  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:11:52.119869  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:52.162270  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:52.162298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:52.215766  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:52.215805  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
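When the retried init also times out, the test collects diagnostics from the node before reporting the failure: kubelet and CRI-O journals, container status, and kernel messages. A minimal sketch of the same collection, using the exact sources shown in the Run: lines above; not part of the captured output:

sudo journalctl -u kubelet -n 400       # kubelet service logs
sudo journalctl -u crio -n 400          # CRI-O runtime logs
sudo crictl ps -a                       # container status (the test falls back to "docker ps -a")
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings and errors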
	W1212 01:11:52.231106  142150 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:11:52.231187  142150 out.go:270] * 
	W1212 01:11:52.231316  142150 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout / stderr: identical to the kubeadm init output shown above (verbatim duplicate omitted)
	
	W1212 01:11:52.231351  142150 out.go:270] * 
	W1212 01:11:52.232281  142150 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:11:52.235692  142150 out.go:201] 
	W1212 01:11:52.236852  142150 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout / stderr: identical to the kubeadm init output shown above (verbatim duplicate omitted)
	
	W1212 01:11:52.236890  142150 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:11:52.236910  142150 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:11:52.238333  142150 out.go:201] 
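For reference, the remediation suggested in the log above amounts to the following commands; this is a sketch based only on the log's own hints (a systemd-managed kubelet and the cri-o runtime), and <profile> is a placeholder for the affected cluster profile name, not a value taken from this run:

	# On the node: confirm whether the kubelet is running and why it failed
	systemctl status kubelet
	journalctl -xeu kubelet
	# From the host: retry the start with the cgroup-driver hint quoted above
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd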
	
	
	==> CRI-O <==
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.466265298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966301466238427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2782a4f3-c437-45a7-b730-2b6600da7e5c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.466745955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb871280-fcc7-4ca6-8d67-bc4bc2a49921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.466801044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb871280-fcc7-4ca6-8d67-bc4bc2a49921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.467048671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8,PodSandboxId:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965756316642871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca,PodSandboxId:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755432421630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61,PodSandboxId:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755308395051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39
249ae0-a54d-455d-a2ce-870c71fd2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c,PodSandboxId:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733965754528339261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17,PodSandboxId:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965743557492810,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6,PodSandboxId:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965743494303008,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61,PodSandboxId:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965743464966518,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6,PodSandboxId:982a319e60e773bc385fb86d7d5377bff886393b0192657a0f5c2abd87383ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965743431872392,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef,PodSandboxId:128809195d8a84b211ffc74302c9106482d1af585ec0aa274a2cb18f4dceee3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965461052531803,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb871280-fcc7-4ca6-8d67-bc4bc2a49921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.516494292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01b24060-a2a6-4244-8ec5-460c3cdc238d name=/runtime.v1.RuntimeService/Version
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.516569068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01b24060-a2a6-4244-8ec5-460c3cdc238d name=/runtime.v1.RuntimeService/Version
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.518124516Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f4c3a0da-dd55-4f20-8f81-c0bd7abd5850 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.518464766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966301518437130,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f4c3a0da-dd55-4f20-8f81-c0bd7abd5850 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.519320476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe4e95c4-51a6-4cb5-891f-bf8ba2636635 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.519370628Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe4e95c4-51a6-4cb5-891f-bf8ba2636635 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.519849850Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8,PodSandboxId:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965756316642871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca,PodSandboxId:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755432421630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61,PodSandboxId:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755308395051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39
249ae0-a54d-455d-a2ce-870c71fd2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c,PodSandboxId:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733965754528339261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17,PodSandboxId:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965743557492810,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6,PodSandboxId:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965743494303008,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61,PodSandboxId:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965743464966518,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6,PodSandboxId:982a319e60e773bc385fb86d7d5377bff886393b0192657a0f5c2abd87383ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965743431872392,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef,PodSandboxId:128809195d8a84b211ffc74302c9106482d1af585ec0aa274a2cb18f4dceee3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965461052531803,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe4e95c4-51a6-4cb5-891f-bf8ba2636635 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.564155428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ccded918-63f5-4088-9df8-e88a6038e69f name=/runtime.v1.RuntimeService/Version
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.564227544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ccded918-63f5-4088-9df8-e88a6038e69f name=/runtime.v1.RuntimeService/Version
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.565650464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcf99273-acff-4c60-b781-2824a0520e11 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.566131846Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966301565989528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcf99273-acff-4c60-b781-2824a0520e11 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.567295910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6c64018-721a-4742-a9bd-2ced9a64fc70 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.567350656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6c64018-721a-4742-a9bd-2ced9a64fc70 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.567656137Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8,PodSandboxId:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965756316642871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca,PodSandboxId:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755432421630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61,PodSandboxId:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755308395051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39
249ae0-a54d-455d-a2ce-870c71fd2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c,PodSandboxId:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733965754528339261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17,PodSandboxId:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965743557492810,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6,PodSandboxId:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965743494303008,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61,PodSandboxId:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965743464966518,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6,PodSandboxId:982a319e60e773bc385fb86d7d5377bff886393b0192657a0f5c2abd87383ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965743431872392,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef,PodSandboxId:128809195d8a84b211ffc74302c9106482d1af585ec0aa274a2cb18f4dceee3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965461052531803,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6c64018-721a-4742-a9bd-2ced9a64fc70 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.605242472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1dd6faeb-8a72-4f13-a6b7-f7cff141adbc name=/runtime.v1.RuntimeService/Version
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.605326312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1dd6faeb-8a72-4f13-a6b7-f7cff141adbc name=/runtime.v1.RuntimeService/Version
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.606782869Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc856a10-6128-464d-97a8-149bbe7b119e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.607177654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966301607154322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc856a10-6128-464d-97a8-149bbe7b119e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.607982784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d320a8a-a486-4881-bb96-1e43a5e33f65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.608250799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d320a8a-a486-4881-bb96-1e43a5e33f65 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:18:21 no-preload-242725 crio[713]: time="2024-12-12 01:18:21.608446173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8,PodSandboxId:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965756316642871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca,PodSandboxId:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755432421630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61,PodSandboxId:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755308395051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39
249ae0-a54d-455d-a2ce-870c71fd2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c,PodSandboxId:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733965754528339261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17,PodSandboxId:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965743557492810,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6,PodSandboxId:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965743494303008,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61,PodSandboxId:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965743464966518,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6,PodSandboxId:982a319e60e773bc385fb86d7d5377bff886393b0192657a0f5c2abd87383ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965743431872392,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef,PodSandboxId:128809195d8a84b211ffc74302c9106482d1af585ec0aa274a2cb18f4dceee3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965461052531803,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d320a8a-a486-4881-bb96-1e43a5e33f65 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff7827fa37f22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   7026f323931fe       storage-provisioner
	60b32269243ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   66c6b1125b941       coredns-7c65d6cfc9-tflp9
	a1789431d93d4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   5042075143d40       coredns-7c65d6cfc9-kv2c6
	a94f000b28034       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   d725928490c8d       kube-proxy-5kc2s
	fbdb347f38c02       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   a3676ecf3bb15       etcd-no-preload-242725
	11444b2efab69       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   cd5a87dc431f1       kube-scheduler-no-preload-242725
	dd1e0c805d800       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   a466fb0c25517       kube-apiserver-no-preload-242725
	ccee5585bfc48       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   982a319e60e77       kube-controller-manager-no-preload-242725
	3d7c8b9818dc2       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   128809195d8a8       kube-apiserver-no-preload-242725
	
	
	==> coredns [60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-242725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-242725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=no-preload-242725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_12T01_09_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 01:09:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-242725
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 01:18:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 01:14:23 +0000   Thu, 12 Dec 2024 01:09:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 01:14:23 +0000   Thu, 12 Dec 2024 01:09:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 01:14:23 +0000   Thu, 12 Dec 2024 01:09:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 01:14:23 +0000   Thu, 12 Dec 2024 01:09:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.222
	  Hostname:    no-preload-242725
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d23c4d5b575b461683e971eeb726b8b7
	  System UUID:                d23c4d5b-575b-4616-83e9-71eeb726b8b7
	  Boot ID:                    65fa1cdf-a3ab-41b8-8a92-f83d8d596f20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-kv2c6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-7c65d6cfc9-tflp9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-no-preload-242725                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-no-preload-242725             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-no-preload-242725    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-5kc2s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-242725             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-6867b74b74-m2g6s              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s (x8 over 9m19s)  kubelet          Node no-preload-242725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s (x8 over 9m19s)  kubelet          Node no-preload-242725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x7 over 9m19s)  kubelet          Node no-preload-242725 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m13s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s                  kubelet          Node no-preload-242725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s                  kubelet          Node no-preload-242725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s                  kubelet          Node no-preload-242725 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m8s                   node-controller  Node no-preload-242725 event: Registered Node no-preload-242725 in Controller
	
	
	==> dmesg <==
	[  +0.046231] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.228118] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.003258] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.735585] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 01:04] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.060708] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055781] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.204319] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.120468] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.313452] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[ +16.117643] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.061381] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.182738] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +6.226683] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.699152] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.593800] kauditd_printk_skb: 23 callbacks suppressed
	[Dec12 01:09] systemd-fstab-generator[3124]: Ignoring "noauto" option for root device
	[  +0.060978] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.029473] systemd-fstab-generator[3454]: Ignoring "noauto" option for root device
	[  +0.088838] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.796874] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +0.101045] kauditd_printk_skb: 12 callbacks suppressed
	[Dec12 01:10] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17] <==
	{"level":"info","ts":"2024-12-12T01:09:04.011740Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-12T01:09:04.011861Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.222:2380"}
	{"level":"info","ts":"2024-12-12T01:09:04.011974Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.222:2380"}
	{"level":"info","ts":"2024-12-12T01:09:04.023221Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-12T01:09:04.023147Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b0f93967598a482b","initial-advertise-peer-urls":["https://192.168.61.222:2380"],"listen-peer-urls":["https://192.168.61.222:2380"],"advertise-client-urls":["https://192.168.61.222:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.222:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-12T01:09:04.636258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-12T01:09:04.636368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-12T01:09:04.636476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b received MsgPreVoteResp from b0f93967598a482b at term 1"}
	{"level":"info","ts":"2024-12-12T01:09:04.636573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b became candidate at term 2"}
	{"level":"info","ts":"2024-12-12T01:09:04.636600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b received MsgVoteResp from b0f93967598a482b at term 2"}
	{"level":"info","ts":"2024-12-12T01:09:04.636611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b became leader at term 2"}
	{"level":"info","ts":"2024-12-12T01:09:04.636700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0f93967598a482b elected leader b0f93967598a482b at term 2"}
	{"level":"info","ts":"2024-12-12T01:09:04.641410Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:09:04.642097Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b0f93967598a482b","local-member-attributes":"{Name:no-preload-242725 ClientURLs:[https://192.168.61.222:2379]}","request-path":"/0/members/b0f93967598a482b/attributes","cluster-id":"d5cdccca781de8ae","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-12T01:09:04.642144Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:09:04.642699Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d5cdccca781de8ae","local-member-id":"b0f93967598a482b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:09:04.642843Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:09:04.642937Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:09:04.643007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:09:04.652614Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:09:04.653441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.222:2379"}
	{"level":"info","ts":"2024-12-12T01:09:04.655287Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:09:04.655372Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-12T01:09:04.658926Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-12T01:09:04.658887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 01:18:21 up 14 min,  0 users,  load average: 0.08, 0.13, 0.15
	Linux no-preload-242725 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef] <==
	W1212 01:08:59.782931       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.824029       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.848773       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.851410       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.871806       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.933865       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.067223       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.077818       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.115895       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.225945       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.231351       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.320335       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.338336       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.342854       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.360367       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.441033       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.529878       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.571507       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.604291       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.622786       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.657449       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.657642       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.727181       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.755589       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.829326       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61] <==
	E1212 01:14:07.195420       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1212 01:14:07.195454       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:14:07.196635       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:14:07.196697       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:15:07.197255       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:15:07.197351       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1212 01:15:07.197390       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:15:07.197449       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:15:07.198608       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:15:07.198654       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:17:07.199826       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:17:07.199952       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1212 01:17:07.199837       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:17:07.200128       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:17:07.201448       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:17:07.201488       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6] <==
	E1212 01:13:13.107877       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:13:13.644832       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:13:43.114778       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:13:43.653821       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:14:13.121706       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:14:13.662337       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:14:23.890748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-242725"
	E1212 01:14:43.129491       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:14:43.670593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:15:12.698290       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="405.068µs"
	E1212 01:15:13.138520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:15:13.679587       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:15:24.699380       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="349.082µs"
	E1212 01:15:43.145636       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:15:43.688264       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:16:13.152634       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:16:13.696006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:16:43.160267       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:16:43.703838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:17:13.167484       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:17:13.712284       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:17:43.175620       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:17:43.721185       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:18:13.182966       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:18:13.729480       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1212 01:09:15.212558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1212 01:09:15.229844       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.222"]
	E1212 01:09:15.229960       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:09:15.409282       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 01:09:15.409340       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:09:15.409388       1 server_linux.go:169] "Using iptables Proxier"
	I1212 01:09:15.417948       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:09:15.418280       1 server.go:483] "Version info" version="v1.31.2"
	I1212 01:09:15.418293       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:09:15.424716       1 config.go:199] "Starting service config controller"
	I1212 01:09:15.424753       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1212 01:09:15.424775       1 config.go:105] "Starting endpoint slice config controller"
	I1212 01:09:15.424779       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1212 01:09:15.424812       1 config.go:328] "Starting node config controller"
	I1212 01:09:15.424819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1212 01:09:15.526205       1 shared_informer.go:320] Caches are synced for node config
	I1212 01:09:15.526222       1 shared_informer.go:320] Caches are synced for service config
	I1212 01:09:15.526243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6] <==
	W1212 01:09:06.225418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 01:09:06.225487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:06.226129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:06.225534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:06.226188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:06.225589       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 01:09:06.226237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1212 01:09:06.226280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.162979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 01:09:07.163038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.203344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:07.203458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.229335       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:07.229393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.293461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 01:09:07.294606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.388638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:07.389006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.398793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 01:09:07.398857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.429549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 01:09:07.429622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.474912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:07.474955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1212 01:09:07.819606       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 01:17:17 no-preload-242725 kubelet[3461]: E1212 01:17:17.677747    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:17:18 no-preload-242725 kubelet[3461]: E1212 01:17:18.845981    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966238845422910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:18 no-preload-242725 kubelet[3461]: E1212 01:17:18.846177    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966238845422910,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:28 no-preload-242725 kubelet[3461]: E1212 01:17:28.678309    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:17:28 no-preload-242725 kubelet[3461]: E1212 01:17:28.848001    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966248847766680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:28 no-preload-242725 kubelet[3461]: E1212 01:17:28.848025    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966248847766680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:38 no-preload-242725 kubelet[3461]: E1212 01:17:38.849644    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966258849324698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:38 no-preload-242725 kubelet[3461]: E1212 01:17:38.849942    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966258849324698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:39 no-preload-242725 kubelet[3461]: E1212 01:17:39.676704    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:17:48 no-preload-242725 kubelet[3461]: E1212 01:17:48.852035    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966268851715162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:48 no-preload-242725 kubelet[3461]: E1212 01:17:48.852441    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966268851715162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:53 no-preload-242725 kubelet[3461]: E1212 01:17:53.676918    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:17:58 no-preload-242725 kubelet[3461]: E1212 01:17:58.854121    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966278853657405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:17:58 no-preload-242725 kubelet[3461]: E1212 01:17:58.854145    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966278853657405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:18:04 no-preload-242725 kubelet[3461]: E1212 01:18:04.676280    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:18:08 no-preload-242725 kubelet[3461]: E1212 01:18:08.743963    3461 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 01:18:08 no-preload-242725 kubelet[3461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 01:18:08 no-preload-242725 kubelet[3461]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 01:18:08 no-preload-242725 kubelet[3461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 01:18:08 no-preload-242725 kubelet[3461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 01:18:08 no-preload-242725 kubelet[3461]: E1212 01:18:08.855479    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966288854972096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:18:08 no-preload-242725 kubelet[3461]: E1212 01:18:08.855520    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966288854972096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:18:18 no-preload-242725 kubelet[3461]: E1212 01:18:18.677144    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:18:18 no-preload-242725 kubelet[3461]: E1212 01:18:18.857508    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966298857270087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:18:18 no-preload-242725 kubelet[3461]: E1212 01:18:18.857608    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966298857270087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8] <==
	I1212 01:09:16.436510       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 01:09:16.456880       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 01:09:16.456993       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 01:09:16.465438       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 01:09:16.465626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-242725_d7e1b762-b572-4dd7-a67e-47acc0186cfc!
	I1212 01:09:16.467244       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58129ccb-db34-4d41-b6ab-c80c5b3f104f", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-242725_d7e1b762-b572-4dd7-a67e-47acc0186cfc became leader
	I1212 01:09:16.566693       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-242725_d7e1b762-b572-4dd7-a67e-47acc0186cfc!
	

-- /stdout --
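For context on the kubelet's repeated "missing image stats" / HasDedicatedImageFs errors in the log above: the eviction manager gets those numbers from the container runtime's ImageFsInfo CRI call. Below is a minimal Go sketch (not part of minikube or the test suite) that queries the same endpoint directly; the socket path /var/run/crio/crio.sock, the 5-second timeout, and the file name are illustrative assumptions, and it relies on the k8s.io/cri-api v1 bindings.

// imagefsinfo_sketch.go: ask the CRI-O image service for the same image
// filesystem stats the kubelet's eviction manager reads.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default endpoint; the kubelet talks to the same socket.
	const endpoint = "unix:///var/run/crio/crio.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, endpoint,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI endpoint: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("ImageFsInfo: %v", err)
	}

	// Print every image filesystem the runtime reports.
	for _, fs := range resp.GetImageFilesystems() {
		fmt.Printf("mountpoint=%s usedBytes=%d inodesUsed=%d\n",
			fs.GetFsId().GetMountpoint(),
			fs.GetUsedBytes().GetValue(),
			fs.GetInodesUsed().GetValue())
	}
}

Note that in the log above the ImageFsInfoResponse is embedded in the error itself, so the complaint appears to be about incomplete stats rather than a failed call; the sketch only shows how to inspect the raw response.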
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-242725 -n no-preload-242725
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-242725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-m2g6s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-242725 describe pod metrics-server-6867b74b74-m2g6s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-242725 describe pod metrics-server-6867b74b74-m2g6s: exit status 1 (64.101889ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-m2g6s" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-242725 describe pod metrics-server-6867b74b74-m2g6s: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.21s)
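The storage-provisioner excerpt above ("attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath") is ordinary client-go leader election. The sketch below shows the same pattern; it uses a coordination.k8s.io Lease lock, whereas the provisioner's event in the log references an Endpoints object, and the kubeconfig source, identity, and timings are illustrative assumptions rather than minikube's actual configuration.

// leaderelect_sketch.go: acquire a leader lease named like the one in the
// storage-provisioner log, then run work only while holding it.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	hostname, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lock name from the log above
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease, starting controller")
				<-ctx.Done() // real work would run here until leadership is lost
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}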

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
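The run of WARNING lines that follows is emitted while the helper polls the pod list and the API server at 192.168.72.25:8443 refuses connections. A rough Go sketch of that kind of polling loop with client-go is below; the kubeconfig source and the 3-second interval are assumptions, while the 9-minute budget matches the wait declared above.

// waitpods_sketch.go: poll for pods matching k8s-app=kubernetes-dashboard,
// tolerating transient API errors the way the test helper does.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, listErr := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if listErr != nil {
				// Keep polling on transient errors such as "connection refused",
				// which is what produces the repeated WARNING lines below.
				log.Printf("WARNING: pod list returned: %v", listErr)
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		log.Fatalf("no matching pods within the timeout: %v", err)
	}
	log.Println("kubernetes-dashboard pod(s) found")
}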
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
E1212 01:12:46.618391   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
E1212 01:12:55.697694   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: (the warning above repeated verbatim for each subsequent poll attempt while the API server at 192.168.72.25:8443 remained unreachable)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
E1212 01:17:46.618189   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
[previous warning repeated 9 times in total]
E1212 01:17:55.697590   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
[identical "connection refused" pod-list warning repeated on every subsequent poll until the 9m0s wait expired]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 2 (248.951785ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-738445" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
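For reference, the readiness wait that times out here can be reproduced outside the test harness. The following is a minimal sketch, assuming a reachable API server, and is not the test suite's actual helper: it polls the kubernetes-dashboard namespace for pods labelled k8s-app=kubernetes-dashboard using client-go. The kubeconfig path and the 9m0s budget are taken from the log above; everything else (names, intervals) is illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in the log above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20083-86355/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 9m0s budget the test uses before giving up.
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// This is the condition the WARNING lines above report: the
			// apiserver at 192.168.72.25:8443 refuses connections.
			fmt.Println("WARNING:", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod running:", p.Name)
					return
				}
			}
		}
		time.Sleep(10 * time.Second) // illustrative poll interval
	}
	fmt.Println("timed out waiting for k8s-app=kubernetes-dashboard")
}

Because the apiserver never comes back after the stop/start, every poll fails with "connection refused" and the loop (like the test) exhausts its deadline.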
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 2 (233.563685ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
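The --format={{.Host}} and --format={{.APIServer}} flags used in the status checks above are Go text/template strings rendered against minikube's status data, which is why the commands print a single field ("Running", "Stopped"). The sketch below shows only the templating mechanism with an illustrative struct; the field names are chosen to match the output here and are not minikube's actual status type.

package main

import (
	"os"
	"text/template"
)

// Illustrative status shape; real field set belongs to minikube, not this sketch.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"}
	// Equivalent of passing --format={{.APIServer}}: render the template
	// against the status value and write the selected field.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints "Stopped", matching the stdout above
}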
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-738445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-738445 logs -n 25: (1.577946796s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-000053 -- sudo                         | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-000053                                 | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-459384                           | kubernetes-upgrade-459384    | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:54 UTC |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-242725             | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	| addons  | enable metrics-server -p embed-certs-607268            | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-535684 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | disable-driver-mounts-535684                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:56 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-076578  | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC | 12 Dec 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:59:45
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:59:45.233578  142150 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:59:45.233778  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.233807  142150 out.go:358] Setting ErrFile to fd 2...
	I1212 00:59:45.233824  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.234389  142150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:59:45.235053  142150 out.go:352] Setting JSON to false
	I1212 00:59:45.235948  142150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13327,"bootTime":1733951858,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:59:45.236050  142150 start.go:139] virtualization: kvm guest
	I1212 00:59:45.238284  142150 out.go:177] * [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:59:45.239634  142150 notify.go:220] Checking for updates...
	I1212 00:59:45.239643  142150 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:59:45.240927  142150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:59:45.242159  142150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:59:45.243348  142150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:59:45.244426  142150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:59:45.245620  142150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:59:45.247054  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:59:45.247412  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.247475  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.262410  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1212 00:59:45.262838  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.263420  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.263444  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.263773  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.263944  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.265490  142150 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1212 00:59:45.266656  142150 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:59:45.266925  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.266959  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.281207  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I1212 00:59:45.281596  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.281963  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.281991  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.282333  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.282519  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.316543  142150 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:59:45.317740  142150 start.go:297] selected driver: kvm2
	I1212 00:59:45.317754  142150 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.317960  142150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:59:45.318921  142150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.319030  142150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:59:45.334276  142150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:59:45.334744  142150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:59:45.334784  142150 cni.go:84] Creating CNI manager for ""
	I1212 00:59:45.334845  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:59:45.334901  142150 start.go:340] cluster config:
	{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.335060  142150 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.336873  142150 out.go:177] * Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	I1212 00:59:42.763823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:45.338030  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:59:45.338076  142150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:59:45.338087  142150 cache.go:56] Caching tarball of preloaded images
	I1212 00:59:45.338174  142150 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:59:45.338188  142150 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:59:45.338309  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:59:45.338520  142150 start.go:360] acquireMachinesLock for old-k8s-version-738445: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:59:48.839858  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:51.911930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:57.991816  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:01.063931  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:07.143823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:10.215896  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:16.295837  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:19.367812  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:25.447920  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:28.519965  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:34.599875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:37.671930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:43.751927  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:46.823861  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:52.903942  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:55.975967  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:02.055889  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:05.127830  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:11.207862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:14.279940  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:20.359871  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:23.431885  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:29.511831  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:32.583875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:38.663880  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:41.735869  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:47.815810  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:50.887937  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:56.967886  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:00.039916  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:06.119870  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:09.191917  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:15.271841  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:18.343881  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:24.423844  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:27.495936  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:33.575851  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:36.647862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:39.652816  141469 start.go:364] duration metric: took 4m35.142362604s to acquireMachinesLock for "embed-certs-607268"
	I1212 01:02:39.652891  141469 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:39.652902  141469 fix.go:54] fixHost starting: 
	I1212 01:02:39.653292  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:39.653345  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:39.668953  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1212 01:02:39.669389  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:39.669880  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:02:39.669906  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:39.670267  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:39.670428  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:39.670550  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:02:39.671952  141469 fix.go:112] recreateIfNeeded on embed-certs-607268: state=Stopped err=<nil>
	I1212 01:02:39.671994  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	W1212 01:02:39.672154  141469 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:39.677119  141469 out.go:177] * Restarting existing kvm2 VM for "embed-certs-607268" ...
	I1212 01:02:39.650358  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:39.650413  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650700  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:02:39.650731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650949  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:02:39.652672  141411 machine.go:96] duration metric: took 4m37.426998938s to provisionDockerMachine
	I1212 01:02:39.652723  141411 fix.go:56] duration metric: took 4m37.447585389s for fixHost
	I1212 01:02:39.652731  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 4m37.447868317s
	W1212 01:02:39.652756  141411 start.go:714] error starting host: provision: host is not running
	W1212 01:02:39.652909  141411 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1212 01:02:39.652919  141411 start.go:729] Will try again in 5 seconds ...
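Editor's note: the no-preload-242725 host never answered on port 22, so the driver gives up this attempt, releases the machines lock, and schedules another try in five seconds. A minimal sketch of that outer start-and-retry pattern, with a hypothetical startHost stand-in rather than minikube's real fixHost path:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // startHost is a hypothetical placeholder for the StartHost/fixHost call
    // that failed above; it always reports the same provisioning error.
    func startHost(name string) error {
    	return errors.New("provision: host is not running")
    }

    func main() {
    	const retryDelay = 5 * time.Second // matches "Will try again in 5 seconds" in the log
    	for attempt := 1; attempt <= 2; attempt++ {
    		if err := startHost("no-preload-242725"); err != nil {
    			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    			time.Sleep(retryDelay)
    			continue
    		}
    		fmt.Println("host started")
    		return
    	}
    	fmt.Println("giving up after retries")
    }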
	I1212 01:02:39.682230  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Start
	I1212 01:02:39.682424  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring networks are active...
	I1212 01:02:39.683293  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network default is active
	I1212 01:02:39.683713  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network mk-embed-certs-607268 is active
	I1212 01:02:39.684046  141469 main.go:141] libmachine: (embed-certs-607268) Getting domain xml...
	I1212 01:02:39.684631  141469 main.go:141] libmachine: (embed-certs-607268) Creating domain...
	I1212 01:02:40.886712  141469 main.go:141] libmachine: (embed-certs-607268) Waiting to get IP...
	I1212 01:02:40.887814  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:40.888208  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:40.888304  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:40.888203  142772 retry.go:31] will retry after 273.835058ms: waiting for machine to come up
	I1212 01:02:41.164102  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.164574  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.164604  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.164545  142772 retry.go:31] will retry after 260.789248ms: waiting for machine to come up
	I1212 01:02:41.427069  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.427486  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.427513  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.427449  142772 retry.go:31] will retry after 330.511025ms: waiting for machine to come up
	I1212 01:02:41.760038  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.760388  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.760434  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.760337  142772 retry.go:31] will retry after 564.656792ms: waiting for machine to come up
	I1212 01:02:42.327037  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.327545  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.327567  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.327505  142772 retry.go:31] will retry after 473.714754ms: waiting for machine to come up
	I1212 01:02:42.803228  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.803607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.803639  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.803548  142772 retry.go:31] will retry after 872.405168ms: waiting for machine to come up
	I1212 01:02:43.677522  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:43.677891  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:43.677919  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:43.677833  142772 retry.go:31] will retry after 1.092518369s: waiting for machine to come up
	I1212 01:02:44.655390  141411 start.go:360] acquireMachinesLock for no-preload-242725: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:02:44.771319  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:44.771721  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:44.771751  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:44.771666  142772 retry.go:31] will retry after 1.147907674s: waiting for machine to come up
	I1212 01:02:45.921165  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:45.921636  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:45.921666  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:45.921589  142772 retry.go:31] will retry after 1.69009103s: waiting for machine to come up
	I1212 01:02:47.614391  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:47.614838  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:47.614863  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:47.614792  142772 retry.go:31] will retry after 1.65610672s: waiting for machine to come up
	I1212 01:02:49.272864  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:49.273312  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:49.273337  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:49.273268  142772 retry.go:31] will retry after 2.50327667s: waiting for machine to come up
	I1212 01:02:51.779671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:51.780077  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:51.780104  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:51.780016  142772 retry.go:31] will retry after 2.808303717s: waiting for machine to come up
	I1212 01:02:54.591866  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:54.592241  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:54.592285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:54.592208  142772 retry.go:31] will retry after 3.689107313s: waiting for machine to come up
	I1212 01:02:58.282552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.282980  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has current primary IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.283005  141469 main.go:141] libmachine: (embed-certs-607268) Found IP for machine: 192.168.50.151
	I1212 01:02:58.283018  141469 main.go:141] libmachine: (embed-certs-607268) Reserving static IP address...
	I1212 01:02:58.283419  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.283441  141469 main.go:141] libmachine: (embed-certs-607268) Reserved static IP address: 192.168.50.151
	I1212 01:02:58.283453  141469 main.go:141] libmachine: (embed-certs-607268) DBG | skip adding static IP to network mk-embed-certs-607268 - found existing host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"}
	I1212 01:02:58.283462  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Getting to WaitForSSH function...
	I1212 01:02:58.283469  141469 main.go:141] libmachine: (embed-certs-607268) Waiting for SSH to be available...
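Editor's note: the "will retry after ..." lines above come from polling the libvirt DHCP leases with steadily growing, jittered delays until the domain reports an IP. A rough sketch of that wait loop, assuming a hypothetical lookupIP helper in place of the real lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for reading the DHCP lease for the domain's MAC
    // address; here it always fails so the backoff loop is exercised.
    func lookupIP(domain string) (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    func main() {
    	base := 250 * time.Millisecond
    	for i := 0; i < 15; i++ {
    		if ip, err := lookupIP("embed-certs-607268"); err == nil {
    			fmt.Println("found IP:", ip)
    			return
    		}
    		// Grow the delay and add jitter, similar to the retry.go waits in the log.
    		sleep := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		base = base * 3 / 2
    	}
    }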
	I1212 01:02:58.285792  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286126  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.286162  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286298  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH client type: external
	I1212 01:02:58.286330  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa (-rw-------)
	I1212 01:02:58.286378  141469 main.go:141] libmachine: (embed-certs-607268) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:02:58.286394  141469 main.go:141] libmachine: (embed-certs-607268) DBG | About to run SSH command:
	I1212 01:02:58.286403  141469 main.go:141] libmachine: (embed-certs-607268) DBG | exit 0
	I1212 01:02:58.407633  141469 main.go:141] libmachine: (embed-certs-607268) DBG | SSH cmd err, output: <nil>: 
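Editor's note: WaitForSSH shells out to the system ssh binary with a fixed option set and simply runs `exit 0` to confirm the guest accepts connections. A sketch of the same availability probe via os/exec; the address, key path and flags are copied from the log, but the helper itself is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // sshAvailable runs "exit 0" over ssh, mirroring the probe in the log.
    func sshAvailable(addr, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + addr,
    		"exit 0",
    	}
    	return exec.Command("/usr/bin/ssh", args...).Run()
    }

    func main() {
    	err := sshAvailable("192.168.50.151",
    		"/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa")
    	fmt.Println("ssh available:", err == nil)
    }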
	I1212 01:02:58.407985  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetConfigRaw
	I1212 01:02:58.408745  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.411287  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.411642  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411920  141469 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/config.json ...
	I1212 01:02:58.412117  141469 machine.go:93] provisionDockerMachine start ...
	I1212 01:02:58.412136  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:58.412336  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.414338  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414643  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.414669  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414765  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.414944  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415114  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415259  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.415450  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.415712  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.415724  141469 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:02:58.520032  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:02:58.520068  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520312  141469 buildroot.go:166] provisioning hostname "embed-certs-607268"
	I1212 01:02:58.520341  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520539  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.523169  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.523584  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523733  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.523910  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524092  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524252  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.524411  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.524573  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.524584  141469 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-607268 && echo "embed-certs-607268" | sudo tee /etc/hostname
	I1212 01:02:58.642175  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-607268
	
	I1212 01:02:58.642232  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.645114  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645480  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.645505  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645698  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.645909  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646063  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646192  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.646334  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.646513  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.646530  141469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-607268' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-607268/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-607268' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:02:58.758655  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
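Editor's note: hostname provisioning is two SSH commands, set the name via `sudo hostname` plus /etc/hostname, then patch /etc/hosts so 127.0.1.1 maps to the new name. A sketch that only assembles those command strings for a given machine name (it does not execute anything):

    package main

    import "fmt"

    // hostnameCommands builds the two shell snippets seen in the log.
    func hostnameCommands(name string) (setCmd, hostsCmd string) {
    	setCmd = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
    	hostsCmd = fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, name)
    	return setCmd, hostsCmd
    }

    func main() {
    	set, hosts := hostnameCommands("embed-certs-607268")
    	fmt.Println(set)
    	fmt.Println(hosts)
    }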
	I1212 01:02:58.758696  141469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:02:58.758715  141469 buildroot.go:174] setting up certificates
	I1212 01:02:58.758726  141469 provision.go:84] configureAuth start
	I1212 01:02:58.758735  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.759031  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.761749  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762024  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.762052  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762165  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.764356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.764699  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764781  141469 provision.go:143] copyHostCerts
	I1212 01:02:58.764874  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:02:58.764898  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:02:58.764986  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:02:58.765107  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:02:58.765118  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:02:58.765160  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:02:58.765235  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:02:58.765245  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:02:58.765296  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:02:58.765369  141469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-607268 san=[127.0.0.1 192.168.50.151 embed-certs-607268 localhost minikube]
	I1212 01:02:58.890412  141469 provision.go:177] copyRemoteCerts
	I1212 01:02:58.890519  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:02:58.890560  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.892973  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893262  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.893291  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893471  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.893647  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.893761  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.893855  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:58.973652  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:02:58.998097  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:02:59.022028  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:02:59.045859  141469 provision.go:87] duration metric: took 287.094036ms to configureAuth
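Editor's note: configureAuth regenerates a server certificate whose SANs cover 127.0.0.1, the VM IP, the machine name, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker. A condensed sketch of issuing such a SAN certificate with crypto/x509; it is self-signed here for brevity, whereas minikube signs with its own CA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-607268"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs listed in the log: 127.0.0.1 192.168.50.151 embed-certs-607268 localhost minikube
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.151")},
    		DNSNames:    []string{"embed-certs-607268", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    }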
	I1212 01:02:59.045892  141469 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:02:59.046119  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:02:59.046242  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.048869  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049255  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.049285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049465  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.049642  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049764  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049864  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.049974  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.050181  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.050198  141469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:02:59.276670  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:02:59.276708  141469 machine.go:96] duration metric: took 864.577145ms to provisionDockerMachine
	I1212 01:02:59.276724  141469 start.go:293] postStartSetup for "embed-certs-607268" (driver="kvm2")
	I1212 01:02:59.276747  141469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:02:59.276780  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.277171  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:02:59.277207  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.279974  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280341  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.280387  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280529  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.280738  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.280897  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.281026  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.363091  141469 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:02:59.367476  141469 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:02:59.367503  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:02:59.367618  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:02:59.367749  141469 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:02:59.367844  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:02:59.377895  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:02:59.402410  141469 start.go:296] duration metric: took 125.668908ms for postStartSetup
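Editor's note: postStartSetup scans .minikube/files for local assets and mirrors them onto the guest, here copying files/etc/ssl/certs/936002.pem to /etc/ssl/certs. A sketch of mapping local asset paths to their remote destinations with filepath.WalkDir; the root path is taken from the log, the walk itself is illustrative:

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	root := "/home/jenkins/minikube-integration/20083-86355/.minikube/files"
    	// Walk the local asset tree; the path relative to root becomes the path on the guest.
    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		remote := strings.TrimPrefix(path, root) // e.g. /etc/ssl/certs/936002.pem
    		fmt.Printf("local asset: %s -> %s\n", path, remote)
    		return nil
    	})
    	if err != nil {
    		fmt.Println("walk failed:", err)
    	}
    }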
	I1212 01:02:59.402462  141469 fix.go:56] duration metric: took 19.749561015s for fixHost
	I1212 01:02:59.402485  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.405057  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.405385  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405624  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.405808  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.405974  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.406094  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.406237  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.406423  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.406436  141469 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:02:59.516697  141884 start.go:364] duration metric: took 3m42.810720852s to acquireMachinesLock for "default-k8s-diff-port-076578"
	I1212 01:02:59.516759  141884 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:59.516773  141884 fix.go:54] fixHost starting: 
	I1212 01:02:59.517192  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:59.517241  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:59.533969  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I1212 01:02:59.534367  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:59.534831  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:02:59.534854  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:59.535165  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:59.535362  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:02:59.535499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:02:59.536930  141884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-076578: state=Stopped err=<nil>
	I1212 01:02:59.536951  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	W1212 01:02:59.537093  141884 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:59.538974  141884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-076578" ...
	I1212 01:02:59.516496  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965379.489556963
	
	I1212 01:02:59.516525  141469 fix.go:216] guest clock: 1733965379.489556963
	I1212 01:02:59.516535  141469 fix.go:229] Guest: 2024-12-12 01:02:59.489556963 +0000 UTC Remote: 2024-12-12 01:02:59.40246635 +0000 UTC m=+295.033602018 (delta=87.090613ms)
	I1212 01:02:59.516574  141469 fix.go:200] guest clock delta is within tolerance: 87.090613ms
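Editor's note: the guest clock check parses the `date +%s.%N` output (1733965379.489556963), subtracts the host-side timestamp, and accepts the ~87 ms delta because it is below the tolerance. A sketch of that comparison, assuming a one-second threshold since the log only reports "within tolerance":

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	out := "1733965379.489556963" // guest `date +%s.%N` output from the log
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, _ := strconv.ParseInt(parts[0], 10, 64)
    	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
    	guest := time.Unix(sec, nsec)

    	// Host-side timestamp recorded by fix.go ("Remote:" in the log).
    	remote := time.Date(2024, 12, 12, 1, 2, 59, 402466350, time.UTC)

    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed threshold
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
    }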
	I1212 01:02:59.516580  141469 start.go:83] releasing machines lock for "embed-certs-607268", held for 19.863720249s
	I1212 01:02:59.516605  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.516828  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:59.519731  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520075  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.520111  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520202  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520752  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520921  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.521064  141469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:02:59.521131  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.521155  141469 ssh_runner.go:195] Run: cat /version.json
	I1212 01:02:59.521171  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.523724  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.523971  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524036  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524063  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524221  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524374  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524375  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524401  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524553  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.524562  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524719  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524719  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.524837  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.525000  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.628168  141469 ssh_runner.go:195] Run: systemctl --version
	I1212 01:02:59.635800  141469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:02:59.788137  141469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:02:59.795216  141469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:02:59.795289  141469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:02:59.811889  141469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:02:59.811917  141469 start.go:495] detecting cgroup driver to use...
	I1212 01:02:59.811992  141469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:02:59.827154  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:02:59.841138  141469 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:02:59.841193  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:02:59.854874  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:02:59.869250  141469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:02:59.994723  141469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:00.136385  141469 docker.go:233] disabling docker service ...
	I1212 01:03:00.136462  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:00.150966  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:00.163907  141469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:00.340171  141469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:00.480828  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:00.498056  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:00.518273  141469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:00.518339  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.529504  141469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:00.529571  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.540616  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.553419  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.566004  141469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:00.577682  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.589329  141469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.612561  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.625526  141469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:00.635229  141469 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:00.635289  141469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:00.657569  141469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
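Editor's note: because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet, the br_netfilter module is loaded and IPv4 forwarding is switched on by writing straight into /proc. A sketch of the same checks done natively instead of through sysctl and echo (requires root; paths are the ones from the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"
    	if _, err := os.Stat(bridgeNF); err != nil {
    		// Equivalent of "sudo modprobe br_netfilter" when the sysctl is missing.
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, out)
    		}
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }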
	I1212 01:03:00.669982  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:00.793307  141469 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:00.887423  141469 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:00.887498  141469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:00.892715  141469 start.go:563] Will wait 60s for crictl version
	I1212 01:03:00.892773  141469 ssh_runner.go:195] Run: which crictl
	I1212 01:03:00.896646  141469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:00.933507  141469 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:00.933606  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:00.977011  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:01.008491  141469 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:02:59.540301  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Start
	I1212 01:02:59.540482  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring networks are active...
	I1212 01:02:59.541181  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network default is active
	I1212 01:02:59.541503  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network mk-default-k8s-diff-port-076578 is active
	I1212 01:02:59.541802  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Getting domain xml...
	I1212 01:02:59.542437  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Creating domain...
	I1212 01:03:00.796803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting to get IP...
	I1212 01:03:00.797932  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798386  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798495  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.798404  142917 retry.go:31] will retry after 199.022306ms: waiting for machine to come up
	I1212 01:03:00.999067  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999547  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999572  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.999499  142917 retry.go:31] will retry after 340.093067ms: waiting for machine to come up
	I1212 01:03:01.340839  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341513  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.341437  142917 retry.go:31] will retry after 469.781704ms: waiting for machine to come up
	I1212 01:03:01.009956  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:03:01.012767  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013224  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:03:01.013252  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013471  141469 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:01.017815  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:01.032520  141469 kubeadm.go:883] updating cluster {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:01.032662  141469 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:01.032715  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:01.070406  141469 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:01.070478  141469 ssh_runner.go:195] Run: which lz4
	I1212 01:03:01.074840  141469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:01.079207  141469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:01.079238  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:02.524822  141469 crio.go:462] duration metric: took 1.450020274s to copy over tarball
	I1212 01:03:02.524909  141469 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:01.812803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813298  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813335  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.813232  142917 retry.go:31] will retry after 552.327376ms: waiting for machine to come up
	I1212 01:03:02.367682  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368152  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368187  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:02.368106  142917 retry.go:31] will retry after 629.731283ms: waiting for machine to come up
	I1212 01:03:02.999887  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000307  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000339  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.000235  142917 retry.go:31] will retry after 764.700679ms: waiting for machine to come up
	I1212 01:03:03.766389  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766891  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766919  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.766845  142917 retry.go:31] will retry after 920.806371ms: waiting for machine to come up
	I1212 01:03:04.689480  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690029  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:04.689996  142917 retry.go:31] will retry after 1.194297967s: waiting for machine to come up
	I1212 01:03:05.886345  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886729  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886796  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:05.886714  142917 retry.go:31] will retry after 1.60985804s: waiting for machine to come up
	I1212 01:03:04.719665  141469 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194717299s)
	I1212 01:03:04.719708  141469 crio.go:469] duration metric: took 2.194851225s to extract the tarball
	I1212 01:03:04.719719  141469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:04.756600  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:04.802801  141469 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:04.802832  141469 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:04.802840  141469 kubeadm.go:934] updating node { 192.168.50.151 8443 v1.31.2 crio true true} ...
	I1212 01:03:04.802949  141469 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-607268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:04.803008  141469 ssh_runner.go:195] Run: crio config
	I1212 01:03:04.854778  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:04.854804  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:04.854815  141469 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:04.854836  141469 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.151 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-607268 NodeName:embed-certs-607268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:04.854962  141469 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-607268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:04.855023  141469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:04.864877  141469 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:04.864967  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:04.874503  141469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1212 01:03:04.891124  141469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:04.907560  141469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1212 01:03:04.924434  141469 ssh_runner.go:195] Run: grep 192.168.50.151	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:04.928518  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:04.940523  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:05.076750  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:05.094388  141469 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268 for IP: 192.168.50.151
	I1212 01:03:05.094424  141469 certs.go:194] generating shared ca certs ...
	I1212 01:03:05.094440  141469 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:05.094618  141469 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:05.094691  141469 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:05.094710  141469 certs.go:256] generating profile certs ...
	I1212 01:03:05.094833  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/client.key
	I1212 01:03:05.094916  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key.9253237b
	I1212 01:03:05.094968  141469 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key
	I1212 01:03:05.095131  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:05.095177  141469 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:05.095192  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:05.095224  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:05.095254  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:05.095293  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:05.095359  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:05.096133  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:05.130605  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:05.164694  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:05.206597  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:05.241305  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 01:03:05.270288  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:05.296137  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:05.320158  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:05.343820  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:05.373277  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:05.397070  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:05.420738  141469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:05.437822  141469 ssh_runner.go:195] Run: openssl version
	I1212 01:03:05.443744  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:05.454523  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459182  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459237  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.465098  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:05.475681  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:05.486396  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490883  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490929  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.496613  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:05.507295  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:05.517980  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522534  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522590  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.528117  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:05.538979  141469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:05.543723  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:05.549611  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:05.555445  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:05.561482  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:05.567221  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:05.573015  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:03:05.578902  141469 kubeadm.go:392] StartCluster: {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:05.578984  141469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:05.579042  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.619027  141469 cri.go:89] found id: ""
	I1212 01:03:05.619115  141469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:05.629472  141469 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:05.629501  141469 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:05.629567  141469 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:05.639516  141469 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:05.640491  141469 kubeconfig.go:125] found "embed-certs-607268" server: "https://192.168.50.151:8443"
	I1212 01:03:05.642468  141469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:05.651891  141469 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.151
	I1212 01:03:05.651922  141469 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:05.651934  141469 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:05.651978  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.686414  141469 cri.go:89] found id: ""
	I1212 01:03:05.686501  141469 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:05.702724  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:05.712454  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:05.712480  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:05.712531  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:05.721529  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:05.721603  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:05.730897  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:05.739758  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:05.739815  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:05.749089  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.758042  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:05.758104  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.767425  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:05.776195  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:05.776260  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:05.785435  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:05.794795  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:05.918710  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:06.846975  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.072898  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.139677  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.237216  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:07.237336  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:07.738145  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.238219  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.255671  141469 api_server.go:72] duration metric: took 1.018455783s to wait for apiserver process to appear ...
	I1212 01:03:08.255705  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:08.255734  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:08.256408  141469 api_server.go:269] stopped: https://192.168.50.151:8443/healthz: Get "https://192.168.50.151:8443/healthz": dial tcp 192.168.50.151:8443: connect: connection refused
	I1212 01:03:08.756070  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:07.498527  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498942  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498966  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:07.498889  142917 retry.go:31] will retry after 2.278929136s: waiting for machine to come up
	I1212 01:03:09.779321  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779847  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779879  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:09.779793  142917 retry.go:31] will retry after 1.82028305s: waiting for machine to come up
	I1212 01:03:10.630080  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.630121  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.630140  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.674408  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.674470  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.756660  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.763043  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:10.763088  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.256254  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.263457  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.263481  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.756759  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.764019  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.764053  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:12.256627  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:12.262369  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:03:12.270119  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:12.270153  141469 api_server.go:131] duration metric: took 4.014438706s to wait for apiserver health ...
	I1212 01:03:12.270164  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:12.270172  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:12.272148  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:12.273667  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:12.289376  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:12.312620  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:12.323663  141469 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:12.323715  141469 system_pods.go:61] "coredns-7c65d6cfc9-n66x6" [ae2c1ac7-0c17-453d-a05c-70fbf6d81e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:12.323727  141469 system_pods.go:61] "etcd-embed-certs-607268" [811dc3d0-d893-45a0-a5c7-3fee0efd2e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:12.323747  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [11848f2c-215b-4cf4-88f0-93151c55e7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:12.323764  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [4f4066ab-b6e4-4a46-a03b-dda1776c39ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:12.323776  141469 system_pods.go:61] "kube-proxy-9f6lj" [2463030a-d7db-4031-9e26-0a56a9067520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:12.323784  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [c2aeaf4a-7fb8-4bb8-87ea-5401db017fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:12.323795  141469 system_pods.go:61] "metrics-server-6867b74b74-5bms9" [e1a794f9-cf60-486f-a0e8-670dc7dfb4da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:12.323803  141469 system_pods.go:61] "storage-provisioner" [b29860cd-465d-4e70-ad5d-dd17c22ae290] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:12.323820  141469 system_pods.go:74] duration metric: took 11.170811ms to wait for pod list to return data ...
	I1212 01:03:12.323845  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:12.327828  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:12.327863  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:12.327880  141469 node_conditions.go:105] duration metric: took 4.029256ms to run NodePressure ...
	I1212 01:03:12.327902  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:12.638709  141469 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644309  141469 kubeadm.go:739] kubelet initialised
	I1212 01:03:12.644332  141469 kubeadm.go:740] duration metric: took 5.590168ms waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644356  141469 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:12.650768  141469 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:11.601456  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602012  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602044  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:11.601956  142917 retry.go:31] will retry after 2.272258384s: waiting for machine to come up
	I1212 01:03:13.876607  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.876986  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.877024  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:13.876950  142917 retry.go:31] will retry after 4.014936005s: waiting for machine to come up
	I1212 01:03:19.148724  142150 start.go:364] duration metric: took 3m33.810164292s to acquireMachinesLock for "old-k8s-version-738445"
	I1212 01:03:19.148804  142150 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:19.148816  142150 fix.go:54] fixHost starting: 
	I1212 01:03:19.149247  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:19.149331  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:19.167749  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 01:03:19.168331  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:19.168873  142150 main.go:141] libmachine: Using API Version  1
	I1212 01:03:19.168906  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:19.169286  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:19.169500  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:19.169655  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 01:03:19.171285  142150 fix.go:112] recreateIfNeeded on old-k8s-version-738445: state=Stopped err=<nil>
	I1212 01:03:19.171323  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	W1212 01:03:19.171470  142150 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:19.174413  142150 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-738445" ...
	I1212 01:03:14.657097  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:16.658207  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:17.657933  141469 pod_ready.go:93] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:17.657957  141469 pod_ready.go:82] duration metric: took 5.007165494s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:17.657966  141469 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:19.175763  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .Start
	I1212 01:03:19.175946  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring networks are active...
	I1212 01:03:19.176721  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network default is active
	I1212 01:03:19.177067  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network mk-old-k8s-version-738445 is active
	I1212 01:03:19.177512  142150 main.go:141] libmachine: (old-k8s-version-738445) Getting domain xml...
	I1212 01:03:19.178281  142150 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 01:03:17.896127  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has current primary IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896639  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Found IP for machine: 192.168.39.174
	I1212 01:03:17.896659  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserving static IP address...
	I1212 01:03:17.897028  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.897062  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserved static IP address: 192.168.39.174
	I1212 01:03:17.897087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | skip adding static IP to network mk-default-k8s-diff-port-076578 - found existing host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"}
	I1212 01:03:17.897108  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Getting to WaitForSSH function...
	I1212 01:03:17.897126  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for SSH to be available...
	I1212 01:03:17.899355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899727  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.899754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899911  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH client type: external
	I1212 01:03:17.899941  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa (-rw-------)
	I1212 01:03:17.899976  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:17.899989  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | About to run SSH command:
	I1212 01:03:17.900005  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | exit 0
	I1212 01:03:18.036261  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:18.036610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetConfigRaw
	I1212 01:03:18.037352  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.040173  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040570  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.040595  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040866  141884 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/config.json ...
	I1212 01:03:18.041107  141884 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:18.041134  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.041355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.043609  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.043945  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.043973  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.044142  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.044291  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044466  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.044745  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.044986  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.045002  141884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:18.156161  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:18.156193  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156472  141884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-076578"
	I1212 01:03:18.156499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.159391  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.159871  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.159903  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.160048  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.160244  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160379  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160500  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.160681  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.160898  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.160917  141884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-076578 && echo "default-k8s-diff-port-076578" | sudo tee /etc/hostname
	I1212 01:03:18.285904  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-076578
	
	I1212 01:03:18.285937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.288620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.288987  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.289010  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.289285  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.289491  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289658  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289799  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.289981  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.290190  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.290223  141884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-076578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-076578/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-076578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:18.409683  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:18.409721  141884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:18.409751  141884 buildroot.go:174] setting up certificates
	I1212 01:03:18.409761  141884 provision.go:84] configureAuth start
	I1212 01:03:18.409782  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.410045  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.412393  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412721  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.412756  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.415204  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415502  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.415530  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415663  141884 provision.go:143] copyHostCerts
	I1212 01:03:18.415735  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:18.415757  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:18.415832  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:18.415925  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:18.415933  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:18.415952  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:18.416007  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:18.416015  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:18.416032  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:18.416081  141884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-076578 san=[127.0.0.1 192.168.39.174 default-k8s-diff-port-076578 localhost minikube]
	I1212 01:03:18.502493  141884 provision.go:177] copyRemoteCerts
	I1212 01:03:18.502562  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:18.502594  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.505104  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505377  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.505409  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505568  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.505754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.505892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.506034  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.590425  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:18.616850  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 01:03:18.640168  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:18.664517  141884 provision.go:87] duration metric: took 254.738256ms to configureAuth
	I1212 01:03:18.664542  141884 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:18.664705  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:03:18.664778  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.667425  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.667784  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.667808  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.668004  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.668178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668313  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668448  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.668587  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.668751  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.668772  141884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:18.906880  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:18.906908  141884 machine.go:96] duration metric: took 865.784426ms to provisionDockerMachine
	I1212 01:03:18.906920  141884 start.go:293] postStartSetup for "default-k8s-diff-port-076578" (driver="kvm2")
	I1212 01:03:18.906931  141884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:18.906949  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.907315  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:18.907348  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.909882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910213  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.910242  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910347  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.910542  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.910680  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.910806  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.994819  141884 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:18.998959  141884 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:18.998989  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:18.999069  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:18.999163  141884 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:18.999252  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:19.009226  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:19.032912  141884 start.go:296] duration metric: took 125.973128ms for postStartSetup
	I1212 01:03:19.032960  141884 fix.go:56] duration metric: took 19.516187722s for fixHost
	I1212 01:03:19.032990  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.035623  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.035947  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.035977  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.036151  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.036310  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036438  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036581  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.036738  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:19.036906  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:19.036919  141884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:19.148565  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965399.101726035
	
	I1212 01:03:19.148592  141884 fix.go:216] guest clock: 1733965399.101726035
	I1212 01:03:19.148602  141884 fix.go:229] Guest: 2024-12-12 01:03:19.101726035 +0000 UTC Remote: 2024-12-12 01:03:19.032967067 +0000 UTC m=+242.472137495 (delta=68.758968ms)
	I1212 01:03:19.148628  141884 fix.go:200] guest clock delta is within tolerance: 68.758968ms
	I1212 01:03:19.148635  141884 start.go:83] releasing machines lock for "default-k8s-diff-port-076578", held for 19.631903968s
	I1212 01:03:19.148688  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.149016  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:19.151497  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.151926  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.151954  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.152124  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152598  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152762  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152834  141884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:19.152892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.152952  141884 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:19.152972  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.155620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155694  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.155962  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156057  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.156114  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156123  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156316  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156327  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156469  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156583  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156619  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156826  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.156824  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.268001  141884 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:19.275696  141884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:19.426624  141884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:19.432842  141884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:19.432911  141884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:19.449082  141884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:19.449108  141884 start.go:495] detecting cgroup driver to use...
	I1212 01:03:19.449187  141884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:19.466543  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:19.482668  141884 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:19.482733  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:19.497124  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:19.512626  141884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:19.624948  141884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:19.779469  141884 docker.go:233] disabling docker service ...
	I1212 01:03:19.779545  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:19.794888  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:19.810497  141884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:19.954827  141884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:20.086435  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:20.100917  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:20.120623  141884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:20.120683  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.134353  141884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:20.134431  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.150373  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.165933  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.181524  141884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:20.196891  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.209752  141884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.228990  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.241553  141884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:20.251819  141884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:20.251883  141884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:20.267155  141884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:20.277683  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:20.427608  141884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:20.525699  141884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:20.525804  141884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:20.530984  141884 start.go:563] Will wait 60s for crictl version
	I1212 01:03:20.531055  141884 ssh_runner.go:195] Run: which crictl
	I1212 01:03:20.535013  141884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:20.576177  141884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:20.576251  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.605529  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.638175  141884 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:20.639475  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:20.642566  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643001  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:20.643034  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643196  141884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:20.647715  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:20.662215  141884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:20.662337  141884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:20.662381  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:20.705014  141884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:20.705112  141884 ssh_runner.go:195] Run: which lz4
	I1212 01:03:20.709477  141884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:20.714111  141884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:20.714145  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:19.666527  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:21.666676  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:24.165316  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:20.457742  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting to get IP...
	I1212 01:03:20.458818  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.459318  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.459384  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.459280  143077 retry.go:31] will retry after 312.060355ms: waiting for machine to come up
	I1212 01:03:20.772778  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.773842  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.773876  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.773802  143077 retry.go:31] will retry after 381.023448ms: waiting for machine to come up
	I1212 01:03:21.156449  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.156985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.157017  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.156943  143077 retry.go:31] will retry after 395.528873ms: waiting for machine to come up
	I1212 01:03:21.554397  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.554873  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.554905  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.554833  143077 retry.go:31] will retry after 542.808989ms: waiting for machine to come up
	I1212 01:03:22.099791  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.100330  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.100360  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.100301  143077 retry.go:31] will retry after 627.111518ms: waiting for machine to come up
	I1212 01:03:22.728727  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.729219  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.729244  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.729167  143077 retry.go:31] will retry after 649.039654ms: waiting for machine to come up
	I1212 01:03:23.379498  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:23.379935  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:23.379968  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:23.379864  143077 retry.go:31] will retry after 1.057286952s: waiting for machine to come up
	I1212 01:03:24.438408  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:24.438821  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:24.438849  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:24.438774  143077 retry.go:31] will retry after 912.755322ms: waiting for machine to come up
	I1212 01:03:22.285157  141884 crio.go:462] duration metric: took 1.575709911s to copy over tarball
	I1212 01:03:22.285258  141884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:24.495814  141884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210502234s)
	I1212 01:03:24.495848  141884 crio.go:469] duration metric: took 2.210655432s to extract the tarball
	I1212 01:03:24.495857  141884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:24.533396  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:24.581392  141884 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:24.581419  141884 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:24.581428  141884 kubeadm.go:934] updating node { 192.168.39.174 8444 v1.31.2 crio true true} ...
	I1212 01:03:24.581524  141884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-076578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:24.581594  141884 ssh_runner.go:195] Run: crio config
	I1212 01:03:24.625042  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:24.625073  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:24.625083  141884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:24.625111  141884 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-076578 NodeName:default-k8s-diff-port-076578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:24.625238  141884 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-076578"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:24.625313  141884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:24.635818  141884 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:24.635903  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:24.645966  141884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:03:24.665547  141884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:24.682639  141884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1212 01:03:24.700147  141884 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:24.704172  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:24.716697  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:24.842374  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:24.860641  141884 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578 for IP: 192.168.39.174
	I1212 01:03:24.860676  141884 certs.go:194] generating shared ca certs ...
	I1212 01:03:24.860700  141884 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:24.860888  141884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:24.860955  141884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:24.860970  141884 certs.go:256] generating profile certs ...
	I1212 01:03:24.861110  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.key
	I1212 01:03:24.861200  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key.4a68806a
	I1212 01:03:24.861251  141884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key
	I1212 01:03:24.861391  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:24.861444  141884 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:24.861458  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:24.861498  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:24.861535  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:24.861565  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:24.861629  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:24.862588  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:24.899764  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:24.950373  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:24.983222  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:25.017208  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 01:03:25.042653  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:25.071358  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:25.097200  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:25.122209  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:25.150544  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:25.181427  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:25.210857  141884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:25.229580  141884 ssh_runner.go:195] Run: openssl version
	I1212 01:03:25.236346  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:25.247510  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252355  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252407  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.258511  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:25.272698  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:25.289098  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295737  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295806  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.304133  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:25.315805  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:25.328327  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333482  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333539  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.339367  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:25.351612  141884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:25.357060  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:25.363452  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:25.369984  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:25.376434  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:25.382895  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:25.389199  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:03:25.395232  141884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:25.395325  141884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:25.395370  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.439669  141884 cri.go:89] found id: ""
	I1212 01:03:25.439749  141884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:25.453870  141884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:25.453893  141884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:25.453951  141884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:25.464552  141884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:25.465609  141884 kubeconfig.go:125] found "default-k8s-diff-port-076578" server: "https://192.168.39.174:8444"
	I1212 01:03:25.467767  141884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:25.477907  141884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I1212 01:03:25.477943  141884 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:25.477958  141884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:25.478018  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.521891  141884 cri.go:89] found id: ""
	I1212 01:03:25.521978  141884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
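
The "stopping kube-system containers" step above lists container IDs with `crictl ps -a --quiet` filtered by namespace label (none were found here) and then stops the kubelet so the control plane can be re-bootstrapped. A rough local sketch of the same sequence with os/exec; the real code runs these commands over SSH via ssh_runner, and the `crictl stop` branch is shown only for the non-empty case the log skips:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List all kube-system container IDs, as in the `crictl ps -a --quiet` call above.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl ps failed:", err)
            return
        }
        ids := strings.Fields(string(out))
        if len(ids) > 0 {
            // Stop any running kube-system containers before restarting the control plane.
            args := append([]string{"crictl", "stop"}, ids...)
            if err := exec.Command("sudo", args...).Run(); err != nil {
                fmt.Println("crictl stop failed:", err)
            }
        }
        // Then stop the kubelet, matching `sudo systemctl stop kubelet` in the log.
        if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
            fmt.Println("systemctl stop kubelet failed:", err)
        }
    }
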
	I1212 01:03:25.539029  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:25.549261  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:25.549283  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:25.549341  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:03:25.558948  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:25.559022  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:25.568947  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:03:25.579509  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:25.579614  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:25.589573  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.600434  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:25.600498  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.610337  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:03:25.619956  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:25.620014  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
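
Each stale-config check above greps a kubeconfig-style file for the expected control-plane endpoint and removes the file when the endpoint is missing (or, as in this run, when the file does not exist at all), so the following `kubeadm init phase kubeconfig` regenerates it. A minimal sketch of that loop, reusing the endpoint and file list from the log (illustrative; removing files under /etc/kubernetes requires root):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:8444")
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, path := range confs {
            data, err := os.ReadFile(path)
            // A missing file or a missing endpoint both mean the config is stale:
            // delete it so kubeadm writes a fresh one.
            if err != nil || !bytes.Contains(data, endpoint) {
                if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
                    fmt.Fprintln(os.Stderr, "remove failed:", rmErr)
                }
            }
        }
    }
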
	I1212 01:03:25.631231  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:25.641366  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:25.761159  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:26.165525  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:28.168457  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.168492  141469 pod_ready.go:82] duration metric: took 10.510517291s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.168506  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175334  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.175361  141469 pod_ready.go:82] duration metric: took 6.84531ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175375  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183060  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.183093  141469 pod_ready.go:82] duration metric: took 7.709158ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183106  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.190999  141469 pod_ready.go:93] pod "kube-proxy-9f6lj" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.191028  141469 pod_ready.go:82] duration metric: took 7.913069ms for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.191040  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199945  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.199972  141469 pod_ready.go:82] duration metric: took 8.923682ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199984  141469 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:25.352682  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:25.353126  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:25.353154  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:25.353073  143077 retry.go:31] will retry after 1.136505266s: waiting for machine to come up
	I1212 01:03:26.491444  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:26.491927  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:26.491955  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:26.491868  143077 retry.go:31] will retry after 1.467959561s: waiting for machine to come up
	I1212 01:03:27.961709  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:27.962220  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:27.962255  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:27.962169  143077 retry.go:31] will retry after 2.70831008s: waiting for machine to come up
	I1212 01:03:26.830271  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069070962s)
	I1212 01:03:26.830326  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.035935  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.113317  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
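
Rather than a full `kubeadm init`, the restart path regenerates the control plane piece by piece with individual `kubeadm init phase` invocations (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the same generated config file. A sketch of that sequence, assuming the binary directory and config path shown in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            // Mirrors: sudo env PATH=/var/lib/minikube/binaries/v1.31.2:$PATH \
            //          kubeadm init phase <phase> --config /var/tmp/minikube/kubeadm.yaml
            args := append([]string{"env",
                "PATH=/var/lib/minikube/binaries/v1.31.2:" + os.Getenv("PATH"),
                "kubeadm", "init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("sudo", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
                os.Exit(1)
            }
        }
    }
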
	I1212 01:03:27.210226  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:27.210329  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:27.710504  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.211114  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.242967  141884 api_server.go:72] duration metric: took 1.032736901s to wait for apiserver process to appear ...
	I1212 01:03:28.243012  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:28.243038  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:28.243643  141884 api_server.go:269] stopped: https://192.168.39.174:8444/healthz: Get "https://192.168.39.174:8444/healthz": dial tcp 192.168.39.174:8444: connect: connection refused
	I1212 01:03:28.743921  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.546075  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.546113  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.546129  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.621583  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.621619  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.743860  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.750006  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:31.750052  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.243382  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.269990  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.270033  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.743516  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.752979  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.753012  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:33.243571  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:33.247902  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:03:33.253786  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:33.253810  141884 api_server.go:131] duration metric: took 5.010790107s to wait for apiserver health ...
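
The health wait above polls /healthz roughly every 500ms and treats connection refused, HTTP 403 (anonymous access still forbidden while RBAC bootstrap roles are pending) and HTTP 500 (poststarthooks still failing) as "not ready yet", returning only once the endpoint answers 200 "ok". A minimal polling sketch using the endpoint from the log; TLS verification is skipped here because this illustrative client does not load the minikubeCA certificate:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.174:8444/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("not ready yet, status:", code) // 403/500 while bootstrapping
            } else {
                fmt.Println("not ready yet:", err) // e.g. connection refused
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
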
	I1212 01:03:33.253820  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:33.253826  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:33.255762  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:30.208396  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:32.708024  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:30.671930  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:30.672414  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:30.672442  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:30.672366  143077 retry.go:31] will retry after 2.799706675s: waiting for machine to come up
	I1212 01:03:33.474261  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:33.474816  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:33.474851  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:33.474758  143077 retry.go:31] will retry after 4.339389188s: waiting for machine to come up
	I1212 01:03:33.257007  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:33.267934  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
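
The bridge CNI step creates /etc/cni/net.d and copies a 496-byte conflist into it; the payload itself is not shown in the log, so the conflist below is only an assumed minimal bridge-plus-portmap configuration of the kind CRI-O would load, not minikube's actual file:

    package main

    import (
        "fmt"
        "os"
    )

    // Assumed minimal bridge conflist; the real 1-k8s.conflist content is not in the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // Mirrors `sudo mkdir -p /etc/cni/net.d` followed by the scp of the conflist.
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
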
	I1212 01:03:33.286197  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:33.297934  141884 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:33.297982  141884 system_pods.go:61] "coredns-7c65d6cfc9-xn886" [db1f42f1-93d9-4942-813d-e3de1cc24801] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:33.297995  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [25555578-8169-4986-aa10-06a442152c50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:33.298006  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [1004c64c-91ca-43c3-9c3d-43dab13d3812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:33.298023  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [63d42313-4ea9-44f9-a8eb-b0c6c73424c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:33.298039  141884 system_pods.go:61] "kube-proxy-7frgh" [191ed421-4297-47c7-a46d-407a8eaa0378] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:33.298049  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [1506a505-697c-4b80-b7ef-55de1116fa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:33.298060  141884 system_pods.go:61] "metrics-server-6867b74b74-k9s7n" [806badc0-b609-421f-9203-3fd91212a145] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:33.298077  141884 system_pods.go:61] "storage-provisioner" [bc133673-b7e2-42b2-98ac-e3284c9162ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:33.298090  141884 system_pods.go:74] duration metric: took 11.875762ms to wait for pod list to return data ...
	I1212 01:03:33.298105  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:33.302482  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:33.302517  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:33.302532  141884 node_conditions.go:105] duration metric: took 4.418219ms to run NodePressure ...
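
The kube-system pod listing and NodePressure/capacity verification above go through the Kubernetes API. A sketch of the same queries with client-go; the kubeconfig path is an assumption for illustration, and the real code uses minikube's own client plumbing rather than this standalone program:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // "waiting for kube-system pods to appear": list them and print phases.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
        }
        // "verifying NodePressure condition": report capacity and pressure conditions.
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }
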
	I1212 01:03:33.302555  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:33.728028  141884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735780  141884 kubeadm.go:739] kubelet initialised
	I1212 01:03:33.735810  141884 kubeadm.go:740] duration metric: took 7.738781ms waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735824  141884 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:33.743413  141884 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:35.750012  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.348909  141411 start.go:364] duration metric: took 54.693436928s to acquireMachinesLock for "no-preload-242725"
	I1212 01:03:39.348976  141411 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:39.348990  141411 fix.go:54] fixHost starting: 
	I1212 01:03:39.349442  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:39.349485  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:39.367203  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I1212 01:03:39.367584  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:39.368158  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:03:39.368185  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:39.368540  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:39.368717  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:39.368854  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:03:39.370433  141411 fix.go:112] recreateIfNeeded on no-preload-242725: state=Stopped err=<nil>
	I1212 01:03:39.370460  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	W1212 01:03:39.370594  141411 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:39.372621  141411 out.go:177] * Restarting existing kvm2 VM for "no-preload-242725" ...
	I1212 01:03:35.206417  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.208384  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.818233  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818777  142150 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 01:03:37.818808  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818818  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 01:03:37.819321  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.819376  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | skip adding static IP to network mk-old-k8s-version-738445 - found existing host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"}
	I1212 01:03:37.819390  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
	I1212 01:03:37.819412  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 01:03:37.819428  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 01:03:37.821654  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822057  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.822084  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822234  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 01:03:37.822265  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 01:03:37.822311  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:37.822325  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 01:03:37.822346  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 01:03:37.951989  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
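
WaitForSSH above repeatedly runs `exit 0` over an external ssh client with host-key checking disabled until the guest's sshd answers. A sketch of that retry loop, reusing a subset of the options and the key path shown in the log; the retry interval is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa",
            "-p", "22",
            "docker@192.168.72.25",
            "exit 0",
        }
        deadline := time.Now().Add(5 * time.Minute)
        for time.Now().Before(deadline) {
            // `exit 0` succeeds as soon as sshd accepts the connection and the key.
            if err := exec.Command("ssh", args...).Run(); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for SSH")
    }
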
	I1212 01:03:37.952380  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 01:03:37.953037  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:37.955447  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.955770  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.955801  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.956073  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 01:03:37.956261  142150 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:37.956281  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:37.956490  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:37.958938  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959225  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.959262  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959406  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:37.959569  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959749  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959912  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:37.960101  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:37.960348  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:37.960364  142150 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:38.076202  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:38.076231  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076484  142150 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 01:03:38.076506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076678  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.079316  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079689  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.079717  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.080047  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080178  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080313  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.080481  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.080693  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.080708  142150 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 01:03:38.212896  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 01:03:38.212934  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.215879  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216314  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.216353  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216568  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.216792  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.216980  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.217138  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.217321  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.217556  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.217574  142150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:38.341064  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:38.341103  142150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:38.341148  142150 buildroot.go:174] setting up certificates
	I1212 01:03:38.341167  142150 provision.go:84] configureAuth start
	I1212 01:03:38.341182  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.341471  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:38.343939  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344355  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.344385  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.346597  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.346910  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.346960  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.347103  142150 provision.go:143] copyHostCerts
	I1212 01:03:38.347168  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:38.347188  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:38.347247  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:38.347363  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:38.347373  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:38.347397  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:38.347450  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:38.347457  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:38.347476  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:38.347523  142150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
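
configureAuth re-issues a docker-machine style server certificate signed by the local CA, with the machine's IP addresses and names as SANs (listed in the log line above). A condensed crypto/x509 sketch of that issuance, assuming PKCS#1 RSA keys in PEM form and hypothetical input/output paths; error handling is trimmed for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the CA certificate and key (paths and key format are assumptions).
        caPEM, err := os.ReadFile("certs/ca.pem")
        must(err)
        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        must(err)
        keyPEM, err := os.ReadFile("certs/ca-key.pem")
        must(err)
        keyBlock, _ := pem.Decode(keyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        must(err)

        // Fresh key pair for the server certificate.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)

        // SANs taken from the provision log above.
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-738445"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-738445"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.25")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        must(err)

        must(os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
        must(os.WriteFile("server-key.pem",
            pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
                Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
    }
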
	I1212 01:03:38.675149  142150 provision.go:177] copyRemoteCerts
	I1212 01:03:38.675217  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:38.675251  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.678239  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678639  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.678677  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.679049  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.679174  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.679294  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:38.770527  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:38.797696  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:38.822454  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 01:03:38.847111  142150 provision.go:87] duration metric: took 505.925391ms to configureAuth
	I1212 01:03:38.847145  142150 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:38.847366  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 01:03:38.847459  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.850243  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850594  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.850621  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850779  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.850981  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851153  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851340  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.851581  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.851786  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.851803  142150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:39.093404  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:39.093440  142150 machine.go:96] duration metric: took 1.137164233s to provisionDockerMachine
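
The container-runtime option step above writes /etc/sysconfig/crio.minikube with the service CIDR as an insecure registry and restarts CRI-O; the shell command is shown verbatim in the log. A local Go equivalent of that write-and-restart, for illustration only (the real step runs as a single SSH command on the guest):

    package main

    import (
        "os"
        "os/exec"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Mirrors the sysconfig write in the log, then restarts the runtime.
        must(os.MkdirAll("/etc/sysconfig", 0o755))
        content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        must(os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644))
        must(exec.Command("sudo", "systemctl", "restart", "crio").Run())
    }
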
	I1212 01:03:39.093457  142150 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 01:03:39.093474  142150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:39.093516  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.093848  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:39.093891  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.096719  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097117  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.097151  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097305  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.097497  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.097650  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.097773  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.186726  142150 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:39.191223  142150 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:39.191249  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:39.191337  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:39.191438  142150 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:39.191557  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:39.201460  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:39.229101  142150 start.go:296] duration metric: took 135.624628ms for postStartSetup
	I1212 01:03:39.229146  142150 fix.go:56] duration metric: took 20.080331642s for fixHost
	I1212 01:03:39.229168  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.231985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232443  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.232479  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232702  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.232913  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233076  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233213  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.233368  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:39.233632  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:39.233649  142150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:39.348721  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965419.319505647
	
	I1212 01:03:39.348749  142150 fix.go:216] guest clock: 1733965419.319505647
	I1212 01:03:39.348761  142150 fix.go:229] Guest: 2024-12-12 01:03:39.319505647 +0000 UTC Remote: 2024-12-12 01:03:39.229149912 +0000 UTC m=+234.032647876 (delta=90.355735ms)
	I1212 01:03:39.348787  142150 fix.go:200] guest clock delta is within tolerance: 90.355735ms
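	[note] fix.go compares the guest's `date +%s.%N` output against the host clock and only forces a resync when the delta exceeds a tolerance (here the ~90ms delta passes). A minimal sketch of that comparison; the 2-second threshold is an assumption for illustration, not necessarily minikube's actual limit:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta returns the absolute difference between the guest clock
	// (seconds parsed from `date +%s.%N`) and the host clock captured at the
	// same moment.
	func clockDelta(guestSecs float64, host time.Time) time.Duration {
		guest := time.Unix(0, int64(guestSecs*float64(time.Second)))
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		host := time.Now()
		guestSecs := float64(host.UnixNano())/1e9 + 0.09 // pretend the guest is ~90ms ahead
		d := clockDelta(guestSecs, host)
		const tolerance = 2 * time.Second // assumed threshold for this sketch
		if d <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", d)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", d)
		}
	}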
	I1212 01:03:39.348796  142150 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 20.20001796s
	I1212 01:03:39.348829  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.349099  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:39.352088  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352481  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.352510  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352667  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353244  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353428  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353528  142150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:39.353575  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.353645  142150 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:39.353674  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.356260  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356614  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.356644  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356675  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356908  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357112  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.357172  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.357293  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357375  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357438  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.357514  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357652  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357765  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.441961  142150 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:39.478428  142150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:39.631428  142150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:39.637870  142150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:39.637958  142150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:39.655923  142150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:39.655951  142150 start.go:495] detecting cgroup driver to use...
	I1212 01:03:39.656042  142150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:39.676895  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:39.692966  142150 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:39.693048  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:39.710244  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:39.725830  142150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:39.848998  142150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:40.014388  142150 docker.go:233] disabling docker service ...
	I1212 01:03:40.014458  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:40.035579  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:40.052188  142150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:40.184958  142150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:40.332719  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:40.349338  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:40.371164  142150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:03:40.371232  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.382363  142150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:40.382437  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.393175  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.404397  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
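	[note] The three sed runs above pin the pause image, switch CRI-O's cgroup manager to cgroupfs, and replace any conmon_cgroup setting with "pod". A Go sketch of the same line-level edits applied to the config text (illustrative only; minikube performs them remotely via sed as logged):

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// rewriteCrioConf mimics the sed edits in the log: force pause_image and
	// cgroup_manager, drop any existing conmon_cgroup line, and add
	// conmon_cgroup = "pod" right after cgroup_manager.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		var out []string
		for _, line := range strings.Split(conf, "\n") {
			if strings.Contains(line, "conmon_cgroup = ") {
				continue // equivalent of: sed -i '/conmon_cgroup = .*/d'
			}
			out = append(out, line)
			if strings.Contains(line, "cgroup_manager = ") {
				out = append(out, `conmon_cgroup = "pod"`) // sed -i '/cgroup_manager = .*/a ...'
			}
		}
		return strings.Join(out, "\n")
	}

	func main() {
		fmt.Println(rewriteCrioConf("pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\""))
	}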
	I1212 01:03:40.417867  142150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:40.432988  142150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:40.447070  142150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:40.447145  142150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:40.460260  142150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
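	[note] When the bridge-nf-call-iptables sysctl is missing (the status 255 above), the fallback is to load br_netfilter and then enable IPv4 forwarding before restarting CRI-O. A hedged sketch of that fallback; it assumes root inside the guest and mirrors the commands shown in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter mirrors the fallback in the log: if the sysctl key
	// is absent, load the br_netfilter module, then turn on IP forwarding.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}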
	I1212 01:03:40.472139  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:40.616029  142150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:40.724787  142150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:40.724874  142150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:40.732096  142150 start.go:563] Will wait 60s for crictl version
	I1212 01:03:40.732168  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:40.737266  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:40.790677  142150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:40.790765  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.825617  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.857257  142150 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1212 01:03:37.750453  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.752224  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.374093  141411 main.go:141] libmachine: (no-preload-242725) Calling .Start
	I1212 01:03:39.374303  141411 main.go:141] libmachine: (no-preload-242725) Ensuring networks are active...
	I1212 01:03:39.375021  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network default is active
	I1212 01:03:39.375456  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network mk-no-preload-242725 is active
	I1212 01:03:39.375951  141411 main.go:141] libmachine: (no-preload-242725) Getting domain xml...
	I1212 01:03:39.376726  141411 main.go:141] libmachine: (no-preload-242725) Creating domain...
	I1212 01:03:40.703754  141411 main.go:141] libmachine: (no-preload-242725) Waiting to get IP...
	I1212 01:03:40.705274  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.705752  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.705821  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.705709  143226 retry.go:31] will retry after 196.576482ms: waiting for machine to come up
	I1212 01:03:40.904341  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.904718  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.904740  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.904669  143226 retry.go:31] will retry after 375.936901ms: waiting for machine to come up
	I1212 01:03:41.282278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.282839  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.282871  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.282793  143226 retry.go:31] will retry after 427.731576ms: waiting for machine to come up
	I1212 01:03:41.712553  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.713198  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.713231  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.713084  143226 retry.go:31] will retry after 421.07445ms: waiting for machine to come up
	I1212 01:03:39.707174  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:41.711103  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.207685  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:40.858851  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:40.861713  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:40.862166  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862355  142150 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:40.866911  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:40.879513  142150 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:40.879655  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 01:03:40.879718  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:40.927436  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:40.927517  142150 ssh_runner.go:195] Run: which lz4
	I1212 01:03:40.932446  142150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:40.937432  142150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:40.937461  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 01:03:42.695407  142150 crio.go:462] duration metric: took 1.763008004s to copy over tarball
	I1212 01:03:42.695494  142150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
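	[note] Before this tar run, the stat at 01:03:40.937 showed /preloaded.tar.lz4 absent, so the tarball was scp'd from the local preload cache and is now unpacked into /var so CRI-O starts with the cached images. A rough sketch of that check-then-extract step (illustrative only; the real code drives these commands over SSH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
			fmt.Println("preload tarball not present yet; the real flow scps it from the local cache first")
			return
		}
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
		}
	}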
	I1212 01:03:41.768335  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.252708  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.754333  141884 pod_ready.go:93] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.754362  141884 pod_ready.go:82] duration metric: took 11.010925207s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.754371  141884 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760121  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.760142  141884 pod_ready.go:82] duration metric: took 5.764171ms for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760151  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765554  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.765575  141884 pod_ready.go:82] duration metric: took 5.417017ms for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765589  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:42.135878  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.136341  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.136367  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.136284  143226 retry.go:31] will retry after 477.81881ms: waiting for machine to come up
	I1212 01:03:42.616400  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.616906  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.616929  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.616858  143226 retry.go:31] will retry after 597.608319ms: waiting for machine to come up
	I1212 01:03:43.215837  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:43.216430  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:43.216454  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:43.216363  143226 retry.go:31] will retry after 1.118837214s: waiting for machine to come up
	I1212 01:03:44.336666  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:44.337229  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:44.337253  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:44.337187  143226 retry.go:31] will retry after 1.008232952s: waiting for machine to come up
	I1212 01:03:45.346868  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:45.347386  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:45.347423  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:45.347307  143226 retry.go:31] will retry after 1.735263207s: waiting for machine to come up
	I1212 01:03:47.084570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:47.084980  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:47.085012  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:47.084931  143226 retry.go:31] will retry after 1.662677797s: waiting for machine to come up
	I1212 01:03:46.208324  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.707694  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:45.698009  142150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002470206s)
	I1212 01:03:45.698041  142150 crio.go:469] duration metric: took 3.002598421s to extract the tarball
	I1212 01:03:45.698057  142150 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:45.746245  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:45.783730  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:45.783758  142150 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:03:45.783842  142150 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.783850  142150 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.783909  142150 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.783919  142150 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:45.783965  142150 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.783988  142150 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.783989  142150 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.783935  142150 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.785706  142150 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.785722  142150 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785696  142150 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.785757  142150 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.010563  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.011085  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 01:03:46.072381  142150 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 01:03:46.072424  142150 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.072478  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.113400  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.113431  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.114036  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.114169  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.120739  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.124579  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.124728  142150 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 01:03:46.124754  142150 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 01:03:46.124784  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287160  142150 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 01:03:46.287214  142150 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.287266  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287272  142150 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 01:03:46.287303  142150 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.287353  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294327  142150 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 01:03:46.294369  142150 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.294417  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294420  142150 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 01:03:46.294451  142150 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.294488  142150 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 01:03:46.294501  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294519  142150 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.294547  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.294561  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294640  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.296734  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.297900  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.310329  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.400377  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.400443  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.400478  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.400489  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.426481  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.434403  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.434471  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.568795  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 01:03:46.568915  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.568956  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.569017  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.584299  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.584337  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.608442  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.716715  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.716749  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 01:03:46.727723  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.730180  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 01:03:46.730347  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 01:03:46.744080  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 01:03:46.770152  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 01:03:46.802332  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 01:03:48.053863  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:48.197060  142150 cache_images.go:92] duration metric: took 2.413284252s to LoadCachedImages
	W1212 01:03:48.197176  142150 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
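	[note] The preceding block is minikube deciding, per required image, whether the runtime already has it at the expected digest, whether it can be loaded from the on-disk cache, or whether it has to be left for a later pull; the warning fires because the etcd cache file is missing. A control-flow sketch only (the struct and wording are mine, not minikube's types):

	package main

	import "fmt"

	// imageState captures the per-image decision visible in the log above.
	type imageState struct {
		name         string
		inRuntime    bool
		cachedOnDisk bool
	}

	func plan(images []imageState) {
		for _, img := range images {
			switch {
			case img.inRuntime:
				fmt.Printf("%s already present, skipping\n", img.name)
			case img.cachedOnDisk:
				fmt.Printf("%s needs transfer, loading from cache\n", img.name)
			default:
				fmt.Printf("%s missing from cache, left for the runtime to pull later\n", img.name)
			}
		}
	}

	func main() {
		plan([]imageState{
			{name: "registry.k8s.io/etcd:3.4.13-0"},
			{name: "registry.k8s.io/pause:3.2", cachedOnDisk: true},
		})
	}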
	I1212 01:03:48.197197  142150 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 01:03:48.197352  142150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:48.197443  142150 ssh_runner.go:195] Run: crio config
	I1212 01:03:48.246700  142150 cni.go:84] Creating CNI manager for ""
	I1212 01:03:48.246731  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:48.246743  142150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:48.246771  142150 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 01:03:48.246952  142150 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:48.247031  142150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 01:03:48.257337  142150 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:48.257412  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:48.267272  142150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 01:03:48.284319  142150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:48.301365  142150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1212 01:03:48.321703  142150 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:48.326805  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
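	[note] The bash one-liner above (and the earlier one for host.minikube.internal) is an idempotent /etc/hosts update: drop any existing line for the name, append a fresh tab-separated ip/name mapping, and copy the result back over /etc/hosts. Roughly the same filtering in Go (a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHost removes any line already ending in the given hostname and
	// appends a fresh "ip<tab>hostname" entry, mirroring the grep -v / echo /
	// cp pipeline in the log.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "192.168.72.25", "control-plane.minikube.internal"))
	}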
	I1212 01:03:48.343523  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:48.476596  142150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:48.497742  142150 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 01:03:48.497830  142150 certs.go:194] generating shared ca certs ...
	I1212 01:03:48.497859  142150 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:48.498094  142150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:48.498160  142150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:48.498177  142150 certs.go:256] generating profile certs ...
	I1212 01:03:48.498311  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 01:03:48.498388  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 01:03:48.498445  142150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 01:03:48.498603  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:48.498651  142150 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:48.498665  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:48.498700  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:48.498732  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:48.498761  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:48.498816  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:48.499418  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:48.546900  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:48.587413  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:48.617873  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:48.645334  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 01:03:48.673348  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:03:48.707990  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:48.748273  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:03:48.785187  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:48.818595  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:48.843735  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:48.871353  142150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:48.893168  142150 ssh_runner.go:195] Run: openssl version
	I1212 01:03:48.902034  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:48.916733  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921766  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921849  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.928169  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:48.939794  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:48.951260  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957920  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957987  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.965772  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:48.977889  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:48.989362  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995796  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995866  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:49.002440  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:49.014144  142150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:49.020570  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:49.027464  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:49.033770  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:49.040087  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:49.046103  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:49.052288  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
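	[note] Each `openssl x509 ... -checkend 86400` above asks whether the certificate is still valid 24 hours from now, which is how minikube decides whether the existing control-plane certs can be reused. The same check expressed with crypto/x509 (a sketch reading a single PEM file):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate in pemPath expires within d,
	// which is what `openssl x509 -checkend 86400` checks for 24 hours.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}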
	I1212 01:03:49.058638  142150 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:49.058762  142150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:49.058820  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.101711  142150 cri.go:89] found id: ""
	I1212 01:03:49.101800  142150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:49.113377  142150 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:49.113398  142150 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:49.113439  142150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:49.124296  142150 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:49.125851  142150 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:03:49.126876  142150 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-86355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-738445" cluster setting kubeconfig missing "old-k8s-version-738445" context setting]
	I1212 01:03:49.127925  142150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:49.129837  142150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:49.143200  142150 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.25
	I1212 01:03:49.143244  142150 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:49.143262  142150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:49.143339  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.190150  142150 cri.go:89] found id: ""
	I1212 01:03:49.190240  142150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:49.208500  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:49.219194  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:49.219221  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:49.219299  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:49.231345  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:49.231442  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:49.244931  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:49.254646  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:49.254721  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:49.264535  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.273770  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:49.273875  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.284129  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:49.293154  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:49.293221  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
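	[note] The grep/rm sequence above walks the four kubeconfig-style files under /etc/kubernetes and removes any that do not reference https://control-plane.minikube.internal:8443 (here the files are simply absent, so the rm -f calls are no-ops). A compact sketch of that cleanup:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleConfigs removes any of the given files that exist but do not
	// mention the expected API endpoint, mirroring the grep/rm sequence above.
	func cleanStaleConfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil {
				continue // missing file: nothing to clean up
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("removing stale %s\n", p)
				_ = os.Remove(p)
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}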
	I1212 01:03:49.302654  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:49.312579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:49.458825  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:48.069316  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.069362  141884 pod_ready.go:82] duration metric: took 3.303763458s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.069380  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328758  141884 pod_ready.go:93] pod "kube-proxy-7frgh" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.328784  141884 pod_ready.go:82] duration metric: took 259.396178ms for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328798  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337082  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.337106  141884 pod_ready.go:82] duration metric: took 8.298777ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337119  141884 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:50.343458  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.748914  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:48.749510  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:48.749535  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:48.749475  143226 retry.go:31] will retry after 2.670904101s: waiting for machine to come up
	I1212 01:03:51.421499  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:51.421915  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:51.421961  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:51.421862  143226 retry.go:31] will retry after 3.566697123s: waiting for machine to come up
	I1212 01:03:50.708435  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:53.207675  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:50.328104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.599973  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.749920  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.834972  142150 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:50.835093  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.335779  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.835728  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.335936  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.335817  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.836146  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.335264  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.835917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.344098  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.344166  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:56.345835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.990515  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:54.990916  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:54.990941  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:54.990869  143226 retry.go:31] will retry after 4.288131363s: waiting for machine to come up
	I1212 01:03:55.706167  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:57.707796  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:55.335677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:55.835164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.335826  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.835888  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.335539  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.835520  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.335630  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.835457  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.835939  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.843944  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.844210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:59.284312  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.284807  141411 main.go:141] libmachine: (no-preload-242725) Found IP for machine: 192.168.61.222
	I1212 01:03:59.284834  141411 main.go:141] libmachine: (no-preload-242725) Reserving static IP address...
	I1212 01:03:59.284851  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has current primary IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.285300  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.285334  141411 main.go:141] libmachine: (no-preload-242725) DBG | skip adding static IP to network mk-no-preload-242725 - found existing host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"}
	I1212 01:03:59.285357  141411 main.go:141] libmachine: (no-preload-242725) Reserved static IP address: 192.168.61.222
	I1212 01:03:59.285376  141411 main.go:141] libmachine: (no-preload-242725) Waiting for SSH to be available...
	I1212 01:03:59.285390  141411 main.go:141] libmachine: (no-preload-242725) DBG | Getting to WaitForSSH function...
	I1212 01:03:59.287532  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287840  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.287869  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287970  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH client type: external
	I1212 01:03:59.287998  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa (-rw-------)
	I1212 01:03:59.288043  141411 main.go:141] libmachine: (no-preload-242725) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:59.288066  141411 main.go:141] libmachine: (no-preload-242725) DBG | About to run SSH command:
	I1212 01:03:59.288092  141411 main.go:141] libmachine: (no-preload-242725) DBG | exit 0
	I1212 01:03:59.415723  141411 main.go:141] libmachine: (no-preload-242725) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:59.416104  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetConfigRaw
	I1212 01:03:59.416755  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.419446  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.419848  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.419879  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.420182  141411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json ...
	I1212 01:03:59.420388  141411 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:59.420412  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:59.420637  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.422922  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423257  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.423278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423432  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.423626  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423787  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423918  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.424051  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.424222  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.424231  141411 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:59.536768  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:59.536796  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537016  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:03:59.537042  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537234  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.539806  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540110  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.540141  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540337  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.540509  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540665  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540800  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.540973  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.541155  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.541171  141411 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-242725 && echo "no-preload-242725" | sudo tee /etc/hostname
	I1212 01:03:59.668244  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-242725
	
	I1212 01:03:59.668269  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.671021  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671353  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.671374  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671630  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.671851  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672000  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672160  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.672310  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.672485  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.672502  141411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-242725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-242725/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-242725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:59.792950  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:59.792985  141411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:59.793011  141411 buildroot.go:174] setting up certificates
	I1212 01:03:59.793024  141411 provision.go:84] configureAuth start
	I1212 01:03:59.793041  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.793366  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.796185  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796599  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.796638  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796783  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.799165  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799532  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.799558  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799711  141411 provision.go:143] copyHostCerts
	I1212 01:03:59.799780  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:59.799804  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:59.799869  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:59.800004  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:59.800015  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:59.800051  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:59.800144  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:59.800155  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:59.800182  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:59.800263  141411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.no-preload-242725 san=[127.0.0.1 192.168.61.222 localhost minikube no-preload-242725]
	I1212 01:03:59.987182  141411 provision.go:177] copyRemoteCerts
	I1212 01:03:59.987249  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:59.987290  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.989902  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990285  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.990317  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990520  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.990712  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.990856  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.990981  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.078289  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:04:00.103149  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:04:00.131107  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:04:00.159076  141411 provision.go:87] duration metric: took 366.034024ms to configureAuth
	I1212 01:04:00.159103  141411 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:04:00.159305  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:04:00.159401  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.162140  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162537  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.162570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162696  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.162864  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163016  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163124  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.163262  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.163436  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.163451  141411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:04:00.407729  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:04:00.407758  141411 machine.go:96] duration metric: took 987.35601ms to provisionDockerMachine
	I1212 01:04:00.407773  141411 start.go:293] postStartSetup for "no-preload-242725" (driver="kvm2")
	I1212 01:04:00.407787  141411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:04:00.407810  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.408186  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:04:00.408218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.410950  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411329  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.411360  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411585  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.411809  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.411981  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.412115  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.498221  141411 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:04:00.502621  141411 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:04:00.502644  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:04:00.502705  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:04:00.502779  141411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:04:00.502863  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:04:00.512322  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:00.540201  141411 start.go:296] duration metric: took 132.410555ms for postStartSetup
	I1212 01:04:00.540250  141411 fix.go:56] duration metric: took 21.191260423s for fixHost
	I1212 01:04:00.540287  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.542631  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.542983  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.543011  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.543212  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.543393  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543556  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543702  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.543867  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.544081  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.544095  141411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:04:00.656532  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965440.609922961
	
	I1212 01:04:00.656560  141411 fix.go:216] guest clock: 1733965440.609922961
	I1212 01:04:00.656569  141411 fix.go:229] Guest: 2024-12-12 01:04:00.609922961 +0000 UTC Remote: 2024-12-12 01:04:00.540255801 +0000 UTC m=+358.475944555 (delta=69.66716ms)
	I1212 01:04:00.656597  141411 fix.go:200] guest clock delta is within tolerance: 69.66716ms
	I1212 01:04:00.656616  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 21.307670093s
	I1212 01:04:00.656644  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.656898  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:00.659345  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659694  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.659722  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659878  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660405  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660584  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660663  141411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:04:00.660731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.660751  141411 ssh_runner.go:195] Run: cat /version.json
	I1212 01:04:00.660771  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.663331  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663458  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663717  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663757  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663789  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663802  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663867  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664039  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664044  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664201  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664202  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664359  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664359  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.664490  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.777379  141411 ssh_runner.go:195] Run: systemctl --version
	I1212 01:04:00.783765  141411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:04:00.933842  141411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:04:00.941376  141411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:04:00.941441  141411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:04:00.958993  141411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:04:00.959021  141411 start.go:495] detecting cgroup driver to use...
	I1212 01:04:00.959084  141411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:04:00.977166  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:04:00.991166  141411 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:04:00.991231  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:04:01.004993  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:04:01.018654  141411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:04:01.136762  141411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:04:01.300915  141411 docker.go:233] disabling docker service ...
	I1212 01:04:01.301036  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:04:01.316124  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:04:01.329544  141411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:04:01.451034  141411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:04:01.583471  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:04:01.611914  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:04:01.632628  141411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:04:01.632706  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.644315  141411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:04:01.644384  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.656980  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.668295  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.679885  141411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:04:01.692032  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.703893  141411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.724486  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.737251  141411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:04:01.748955  141411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:04:01.749025  141411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:04:01.763688  141411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:04:01.773871  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:01.903690  141411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:04:02.006921  141411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:04:02.007013  141411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:04:02.013116  141411 start.go:563] Will wait 60s for crictl version
	I1212 01:04:02.013187  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.017116  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:04:02.061210  141411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:04:02.061304  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.093941  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.124110  141411 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:59.708028  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:01.709056  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:04.207527  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.335673  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:00.835254  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.336063  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.835209  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.335874  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.835468  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.335332  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.835312  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.335965  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.835626  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.845618  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.346194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:02.125647  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:02.128481  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.128914  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:02.128973  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.129205  141411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 01:04:02.133801  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:02.148892  141411 kubeadm.go:883] updating cluster {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:04:02.149001  141411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:04:02.149033  141411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:04:02.187762  141411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:04:02.187805  141411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:04:02.187934  141411 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.187988  141411 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.188025  141411 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.188070  141411 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.188118  141411 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.188220  141411 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.188332  141411 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1212 01:04:02.188501  141411 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.189594  141411 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.189674  141411 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.189892  141411 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.190015  141411 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1212 01:04:02.190121  141411 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.190152  141411 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.190169  141411 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.190746  141411 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.372557  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.375185  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.389611  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.394581  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.396799  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.408346  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1212 01:04:02.413152  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.438165  141411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1212 01:04:02.438217  141411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.438272  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.518752  141411 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1212 01:04:02.518804  141411 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.518856  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.556287  141411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1212 01:04:02.556329  141411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.556371  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569629  141411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1212 01:04:02.569671  141411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.569683  141411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1212 01:04:02.569721  141411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.569731  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569770  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667454  141411 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1212 01:04:02.667511  141411 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.667510  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.667532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.667549  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667632  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.667644  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.667671  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.683807  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.784024  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.797709  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.797836  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.797848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.797969  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.822411  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.880580  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.927305  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.928532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.928661  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.938172  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.973083  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:03.023699  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1212 01:04:03.023813  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.069822  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1212 01:04:03.069879  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1212 01:04:03.069920  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1212 01:04:03.069945  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:03.069973  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:03.069990  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:03.070037  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1212 01:04:03.070116  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:03.094188  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1212 01:04:03.094210  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094229  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1212 01:04:03.094249  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094285  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1212 01:04:03.094313  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1212 01:04:03.094379  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1212 01:04:03.094399  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1212 01:04:03.094480  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:04.469173  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.174822  141411 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.080313699s)
	I1212 01:04:05.174869  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1212 01:04:05.174899  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.08062641s)
	I1212 01:04:05.174928  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1212 01:04:05.174968  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.174994  141411 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 01:04:05.175034  141411 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.175086  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:05.175038  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.179340  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:06.207626  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:08.706815  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.335479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:05.835485  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.335252  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.835837  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.335166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.835880  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.336166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.335533  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.835771  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.843908  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:07.654693  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.479543185s)
	I1212 01:04:07.654721  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1212 01:04:07.654743  141411 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.654775  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.475408038s)
	I1212 01:04:07.654848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:07.654784  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.699286  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:09.647620  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.948278157s)
	I1212 01:04:09.647642  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.992718083s)
	I1212 01:04:09.647662  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1212 01:04:09.647683  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 01:04:09.647686  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647734  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647776  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:09.652886  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 01:04:11.112349  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.464585062s)
	I1212 01:04:11.112384  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1212 01:04:11.112412  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.112462  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.206933  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.208623  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.335255  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:10.835915  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.335375  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.835283  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.335618  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.835897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.335425  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.835757  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.335839  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.836078  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.844442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:14.845189  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.083753  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.971262547s)
	I1212 01:04:13.083788  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1212 01:04:13.083821  141411 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:13.083878  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:17.087777  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.003870257s)
	I1212 01:04:17.087818  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1212 01:04:17.087853  141411 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:17.087917  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:15.707981  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:18.207205  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:15.336090  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:15.835274  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.335372  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.835280  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.335431  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.835268  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.335492  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.835414  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.335266  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.835632  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.345467  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:19.845255  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:17.734979  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 01:04:17.735041  141411 cache_images.go:123] Successfully loaded all cached images
	I1212 01:04:17.735049  141411 cache_images.go:92] duration metric: took 15.547226992s to LoadCachedImages
	I1212 01:04:17.735066  141411 kubeadm.go:934] updating node { 192.168.61.222 8443 v1.31.2 crio true true} ...
	I1212 01:04:17.735209  141411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-242725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:04:17.735311  141411 ssh_runner.go:195] Run: crio config
	I1212 01:04:17.780826  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:17.780850  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:17.780859  141411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:04:17.780882  141411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.222 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-242725 NodeName:no-preload-242725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:04:17.781025  141411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-242725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.222"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.222"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:04:17.781091  141411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:04:17.792290  141411 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:04:17.792374  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:04:17.802686  141411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1212 01:04:17.819496  141411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:04:17.836164  141411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1212 01:04:17.855844  141411 ssh_runner.go:195] Run: grep 192.168.61.222	control-plane.minikube.internal$ /etc/hosts
	I1212 01:04:17.860034  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:17.874418  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:18.011357  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:04:18.028641  141411 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725 for IP: 192.168.61.222
	I1212 01:04:18.028666  141411 certs.go:194] generating shared ca certs ...
	I1212 01:04:18.028683  141411 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:04:18.028880  141411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:04:18.028940  141411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:04:18.028954  141411 certs.go:256] generating profile certs ...
	I1212 01:04:18.029088  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.key
	I1212 01:04:18.029164  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key.f2ca822e
	I1212 01:04:18.029235  141411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key
	I1212 01:04:18.029404  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:04:18.029438  141411 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:04:18.029449  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:04:18.029485  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:04:18.029517  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:04:18.029555  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:04:18.029621  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:18.030313  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:04:18.082776  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:04:18.116012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:04:18.147385  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:04:18.180861  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:04:18.225067  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:04:18.255999  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:04:18.280193  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:04:18.304830  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:04:18.329012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:04:18.355462  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:04:18.379991  141411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:04:18.397637  141411 ssh_runner.go:195] Run: openssl version
	I1212 01:04:18.403727  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:04:18.415261  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419809  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419885  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.425687  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:04:18.438938  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:04:18.452150  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457050  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457116  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.463151  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:04:18.476193  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:04:18.489034  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493916  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493969  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.500285  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:04:18.513016  141411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:04:18.517996  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:04:18.524465  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:04:18.530607  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:04:18.536857  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:04:18.542734  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:04:18.548786  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:04:18.554771  141411 kubeadm.go:392] StartCluster: {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:04:18.554897  141411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:04:18.554950  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.593038  141411 cri.go:89] found id: ""
	I1212 01:04:18.593131  141411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:04:18.604527  141411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:04:18.604550  141411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:04:18.604605  141411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:04:18.614764  141411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:04:18.616082  141411 kubeconfig.go:125] found "no-preload-242725" server: "https://192.168.61.222:8443"
	I1212 01:04:18.618611  141411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:04:18.628709  141411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.222
	I1212 01:04:18.628741  141411 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:04:18.628753  141411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:04:18.628814  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.673970  141411 cri.go:89] found id: ""
	I1212 01:04:18.674067  141411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:04:18.692603  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:04:18.704916  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:04:18.704940  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:04:18.704999  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:04:18.714952  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:04:18.715015  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:04:18.724982  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:04:18.734756  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:04:18.734817  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:04:18.744528  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.753898  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:04:18.753955  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.763929  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:04:18.773108  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:04:18.773153  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:04:18.782710  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:04:18.792750  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:18.902446  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.056638  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.154145942s)
	I1212 01:04:20.056677  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.275475  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.348697  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.483317  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:04:20.483487  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.983704  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.484485  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.526353  141411 api_server.go:72] duration metric: took 1.043031812s to wait for apiserver process to appear ...
	I1212 01:04:21.526389  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:04:21.526415  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:20.207458  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:22.212936  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:20.335276  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.835232  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.335776  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.835983  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.335369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.836160  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.335257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.835348  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.336170  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.835521  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.362548  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.362574  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.362586  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.380904  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.380939  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.527174  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.533112  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:24.533146  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.026678  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.031368  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.031409  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.526576  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.532260  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.532297  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:26.026741  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:26.031841  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:04:26.038198  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:04:26.038228  141411 api_server.go:131] duration metric: took 4.511829936s to wait for apiserver health ...
	I1212 01:04:26.038240  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:26.038249  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:26.040150  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:04:22.343994  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:24.344818  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.346428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.041669  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:04:26.055010  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:04:26.076860  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:04:26.092122  141411 system_pods.go:59] 8 kube-system pods found
	I1212 01:04:26.092154  141411 system_pods.go:61] "coredns-7c65d6cfc9-7w9dc" [878bfb78-fae5-4e05-b0ae-362841eace85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:04:26.092163  141411 system_pods.go:61] "etcd-no-preload-242725" [ed97c029-7933-4f4e-ab6c-f514b963ce21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:04:26.092170  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [df66d12b-b847-4ef3-b610-5679ff50e8c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:04:26.092175  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [eb5bc914-4267-41e8-9b37-26b7d3da9f68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:04:26.092180  141411 system_pods.go:61] "kube-proxy-rjwps" [fccefb3e-a282-4f0e-9070-11cc95bca868] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:04:26.092185  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [139de4ad-468c-4f1b-becf-3708bcaa7c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:04:26.092190  141411 system_pods.go:61] "metrics-server-6867b74b74-xzkbn" [16e0364c-18f9-43c2-9394-bc8548ce9caa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:04:26.092194  141411 system_pods.go:61] "storage-provisioner" [06c3232e-011a-4aff-b3ca-81858355bef4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:04:26.092200  141411 system_pods.go:74] duration metric: took 15.315757ms to wait for pod list to return data ...
	I1212 01:04:26.092208  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:04:26.095691  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:04:26.095715  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:04:26.095725  141411 node_conditions.go:105] duration metric: took 3.513466ms to run NodePressure ...
	I1212 01:04:26.095742  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:26.389652  141411 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398484  141411 kubeadm.go:739] kubelet initialised
	I1212 01:04:26.398513  141411 kubeadm.go:740] duration metric: took 8.824036ms waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398524  141411 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:04:26.406667  141411 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.416093  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416137  141411 pod_ready.go:82] duration metric: took 9.418311ms for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.416151  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416165  141411 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.422922  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422951  141411 pod_ready.go:82] duration metric: took 6.774244ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.422962  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422971  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.429822  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429854  141411 pod_ready.go:82] duration metric: took 6.874602ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.429866  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429875  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.483542  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483578  141411 pod_ready.go:82] duration metric: took 53.690915ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.483609  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483622  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
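The pod_ready entries above are minikube's post-restart wait loop: for each of the system-critical labels listed at 01:04:26.398524 it polls the matching kube-system pod until the pod's Ready condition is True, skipping ahead while the hosting node itself still reports Ready=False. A rough stand-alone equivalent, sketched with client-go purely for illustration (the selectors and the 4m0s budget come from the log; the kubeconfig path and polling interval are assumptions, and this is not minikube's own code):

// poll_ready.go: illustrative sketch of the "wait for system-critical pods" step above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Selectors copied from the log line above.
var selectors = []string{
	"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
	"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
}

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the describe-nodes command in this log; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for _, sel := range selectors {
		// Poll every 2s for up to 4 minutes, mirroring the 4m0s budget in the log.
		err := wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, lerr := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if lerr != nil || len(pods.Items) == 0 {
					return false, nil // tolerate transient API errors and missing pods; keep polling
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return true, nil
			})
		fmt.Printf("%s ready=%v\n", sel, err == nil)
	}
}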
	I1212 01:04:24.707572  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:27.207073  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:25.335742  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:25.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.335824  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.836097  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.335807  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.835612  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.335615  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.835140  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.335695  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.843868  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.844684  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:28.081872  141411 pod_ready.go:93] pod "kube-proxy-rjwps" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:28.081901  141411 pod_ready.go:82] duration metric: took 1.598267411s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:28.081921  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:30.088965  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:32.099574  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:29.706557  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:31.706767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:33.706983  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.335304  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:30.835767  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.335536  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.836051  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.336149  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.835257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.335529  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.835959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.336054  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.835955  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.344074  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.345401  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:34.588690  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:34.588715  141411 pod_ready.go:82] duration metric: took 6.50678624s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:34.588727  141411 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:36.596475  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:36.207357  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:38.207516  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.335472  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:35.835166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.335337  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.336098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.835686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.335195  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.835464  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.336101  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.836164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.844602  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.845115  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.095215  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:41.594487  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.708001  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:42.708477  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.336111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:40.835714  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.335249  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.836111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.335205  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.836175  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.335577  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.835336  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.335947  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.835740  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.344150  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.844336  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:43.595231  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:46.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.708857  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:47.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.207408  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:45.335845  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:45.835169  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.335842  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.835872  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.335682  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.835761  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.336087  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.836134  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.844848  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.344941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:48.595492  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.095830  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.706544  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:50.335959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:50.835873  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:50.835996  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:50.878308  142150 cri.go:89] found id: ""
	I1212 01:04:50.878347  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.878360  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:50.878377  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:50.878444  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:50.914645  142150 cri.go:89] found id: ""
	I1212 01:04:50.914673  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.914681  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:50.914687  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:50.914736  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:50.954258  142150 cri.go:89] found id: ""
	I1212 01:04:50.954286  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.954307  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:50.954314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:50.954376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:50.993317  142150 cri.go:89] found id: ""
	I1212 01:04:50.993353  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.993361  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:50.993367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:50.993430  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:51.028521  142150 cri.go:89] found id: ""
	I1212 01:04:51.028551  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.028565  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:51.028572  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:51.028653  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:51.064752  142150 cri.go:89] found id: ""
	I1212 01:04:51.064779  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.064791  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:51.064799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:51.064861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:51.099780  142150 cri.go:89] found id: ""
	I1212 01:04:51.099809  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.099820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:51.099828  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:51.099910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:51.140668  142150 cri.go:89] found id: ""
	I1212 01:04:51.140696  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.140704  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:51.140713  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:51.140747  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.181092  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:51.181123  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:51.239873  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:51.239914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:51.256356  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:51.256383  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:51.391545  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:51.391573  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:51.391602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
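The 142150 entries show the other failure mode in this run: pgrep finds no kube-apiserver process, the crictl listings for every control-plane component come back empty, and minikube falls back to collecting logs. The probe itself is just the crictl invocation quoted in the log, run once per component name; a small sketch that reproduces that check on the node (run with root privileges, crictl assumed to be installed, and again only an illustration rather than minikube's implementation):

// cri_probe.go: mirrors the "sudo crictl ps -a --quiet --name=<component>" calls above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names taken from the sequence of listings in the log.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same flags as in the log: list containers in any state, print only their IDs.
		out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}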
	I1212 01:04:53.965098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:53.981900  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:53.981994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:54.033922  142150 cri.go:89] found id: ""
	I1212 01:04:54.033955  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.033967  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:54.033975  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:54.034038  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:54.084594  142150 cri.go:89] found id: ""
	I1212 01:04:54.084623  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.084634  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:54.084641  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:54.084704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:54.132671  142150 cri.go:89] found id: ""
	I1212 01:04:54.132700  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.132708  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:54.132714  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:54.132768  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:54.169981  142150 cri.go:89] found id: ""
	I1212 01:04:54.170011  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.170019  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:54.170025  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:54.170078  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:54.207708  142150 cri.go:89] found id: ""
	I1212 01:04:54.207737  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.207747  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:54.207753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:54.207812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:54.248150  142150 cri.go:89] found id: ""
	I1212 01:04:54.248176  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.248184  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:54.248191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:54.248240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:54.287792  142150 cri.go:89] found id: ""
	I1212 01:04:54.287820  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.287829  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:54.287835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:54.287892  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:54.322288  142150 cri.go:89] found id: ""
	I1212 01:04:54.322319  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.322330  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:54.322347  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:54.322364  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:54.378947  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:54.378989  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:54.394801  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:54.394845  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:54.473896  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:54.473916  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:54.473929  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:54.558076  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:54.558135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.843857  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:54.345207  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.095934  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.598377  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.706720  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.707883  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.102923  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:57.117418  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:57.117478  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:57.157977  142150 cri.go:89] found id: ""
	I1212 01:04:57.158003  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.158012  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:57.158017  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:57.158074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:57.196388  142150 cri.go:89] found id: ""
	I1212 01:04:57.196417  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.196427  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:57.196432  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:57.196484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:57.238004  142150 cri.go:89] found id: ""
	I1212 01:04:57.238040  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.238048  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:57.238055  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:57.238124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:57.276619  142150 cri.go:89] found id: ""
	I1212 01:04:57.276665  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.276676  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:57.276684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:57.276750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:57.313697  142150 cri.go:89] found id: ""
	I1212 01:04:57.313733  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.313745  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:57.313753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:57.313823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:57.351569  142150 cri.go:89] found id: ""
	I1212 01:04:57.351616  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.351629  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:57.351637  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:57.351705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:57.386726  142150 cri.go:89] found id: ""
	I1212 01:04:57.386758  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.386766  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:57.386772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:57.386821  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:57.421496  142150 cri.go:89] found id: ""
	I1212 01:04:57.421524  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.421533  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:57.421543  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:57.421555  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:57.475374  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:57.475425  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:57.490771  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:57.490813  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:57.562485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:57.562513  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:57.562530  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:57.645022  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:57.645070  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.193526  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:00.209464  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:00.209539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:56.843562  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.843654  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:01.343428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.095640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.596162  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.207281  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:02.706000  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.248388  142150 cri.go:89] found id: ""
	I1212 01:05:00.248417  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.248426  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:00.248431  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:00.248480  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:00.284598  142150 cri.go:89] found id: ""
	I1212 01:05:00.284632  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.284642  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:00.284648  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:00.284710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:00.321068  142150 cri.go:89] found id: ""
	I1212 01:05:00.321107  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.321119  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:00.321127  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:00.321189  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:00.358622  142150 cri.go:89] found id: ""
	I1212 01:05:00.358651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.358660  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:00.358666  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:00.358720  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:00.398345  142150 cri.go:89] found id: ""
	I1212 01:05:00.398373  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.398383  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:00.398390  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:00.398442  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:00.437178  142150 cri.go:89] found id: ""
	I1212 01:05:00.437215  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.437227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:00.437235  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:00.437307  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:00.472621  142150 cri.go:89] found id: ""
	I1212 01:05:00.472651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.472662  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:00.472668  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:00.472735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:00.510240  142150 cri.go:89] found id: ""
	I1212 01:05:00.510268  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.510278  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:00.510288  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:00.510301  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:00.596798  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:00.596819  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:00.596830  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:00.673465  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:00.673506  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.716448  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:00.716485  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:00.770265  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:00.770303  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.285159  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:03.299981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:03.300043  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:03.335198  142150 cri.go:89] found id: ""
	I1212 01:05:03.335227  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.335239  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:03.335248  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:03.335319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:03.372624  142150 cri.go:89] found id: ""
	I1212 01:05:03.372651  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.372659  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:03.372665  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:03.372712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:03.408235  142150 cri.go:89] found id: ""
	I1212 01:05:03.408267  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.408279  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:03.408286  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:03.408350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:03.448035  142150 cri.go:89] found id: ""
	I1212 01:05:03.448068  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.448083  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:03.448091  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:03.448144  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:03.488563  142150 cri.go:89] found id: ""
	I1212 01:05:03.488593  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.488602  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:03.488607  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:03.488658  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:03.527858  142150 cri.go:89] found id: ""
	I1212 01:05:03.527886  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.527905  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:03.527913  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:03.527969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:03.564004  142150 cri.go:89] found id: ""
	I1212 01:05:03.564034  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.564044  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:03.564052  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:03.564113  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:03.610648  142150 cri.go:89] found id: ""
	I1212 01:05:03.610679  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.610691  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:03.610702  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:03.610716  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:03.666958  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:03.666996  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.680927  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:03.680961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:03.762843  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:03.762876  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:03.762894  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:03.838434  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:03.838472  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:03.344025  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.844236  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:03.095197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.096865  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:04.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.208202  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:06.377590  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:06.391770  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:06.391861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:06.430050  142150 cri.go:89] found id: ""
	I1212 01:05:06.430083  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.430096  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:06.430103  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:06.430168  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:06.467980  142150 cri.go:89] found id: ""
	I1212 01:05:06.468014  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.468026  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:06.468033  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:06.468090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:06.505111  142150 cri.go:89] found id: ""
	I1212 01:05:06.505144  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.505156  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:06.505165  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:06.505235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:06.542049  142150 cri.go:89] found id: ""
	I1212 01:05:06.542091  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.542104  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:06.542112  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:06.542175  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:06.576957  142150 cri.go:89] found id: ""
	I1212 01:05:06.576982  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.576991  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:06.576997  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:06.577050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:06.613930  142150 cri.go:89] found id: ""
	I1212 01:05:06.613963  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.613974  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:06.613980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:06.614045  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:06.654407  142150 cri.go:89] found id: ""
	I1212 01:05:06.654441  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.654450  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:06.654455  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:06.654503  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:06.691074  142150 cri.go:89] found id: ""
	I1212 01:05:06.691103  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.691112  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:06.691122  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:06.691133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:06.748638  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:06.748674  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:06.762741  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:06.762772  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:06.833840  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:06.833867  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:06.833885  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:06.914595  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:06.914649  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.461666  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:09.478815  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:09.478889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:09.515975  142150 cri.go:89] found id: ""
	I1212 01:05:09.516007  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.516019  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:09.516042  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:09.516120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:09.556933  142150 cri.go:89] found id: ""
	I1212 01:05:09.556965  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.556977  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:09.556985  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:09.557050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:09.593479  142150 cri.go:89] found id: ""
	I1212 01:05:09.593509  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.593520  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:09.593528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:09.593595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:09.633463  142150 cri.go:89] found id: ""
	I1212 01:05:09.633501  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.633513  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:09.633522  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:09.633583  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:09.666762  142150 cri.go:89] found id: ""
	I1212 01:05:09.666789  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.666798  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:09.666804  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:09.666871  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:09.704172  142150 cri.go:89] found id: ""
	I1212 01:05:09.704206  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.704217  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:09.704228  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:09.704288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:09.749679  142150 cri.go:89] found id: ""
	I1212 01:05:09.749708  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.749717  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:09.749724  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:09.749791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:09.789339  142150 cri.go:89] found id: ""
	I1212 01:05:09.789370  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.789379  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:09.789388  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:09.789399  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:09.875218  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:09.875259  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.918042  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:09.918074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:09.971010  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:09.971052  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:09.985524  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:09.985553  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:10.059280  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
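Every iteration of that diagnostic loop ends the same way: with no apiserver container ever started, the describe-nodes step gets connection refused on localhost:8443, and minikube falls back to the kubelet, dmesg, CRI-O and container-status logs. The fallback is just the shell commands quoted verbatim in the log; they are wrapped in Go here only to show the sequence in one place (command strings are copied from the log, the output handling is illustrative):

// gather_logs.go: runs the exact log-gathering commands quoted above, in order.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each command string is copied from an ssh_runner line in this log.
	steps := [][2]string{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range steps {
		// On this node the describe-nodes step keeps failing: nothing is
		// listening on localhost:8443, so kubectl reports connection refused.
		out, err := exec.Command("/bin/bash", "-c", s[1]).CombinedOutput()
		fmt.Printf("=== %s (err: %v) ===\n%s\n", s[0], err, out)
	}
}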
	I1212 01:05:08.343968  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:10.844912  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.595940  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.596206  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.094527  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.707469  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.206124  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.206285  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.560353  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:12.573641  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:12.573719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:12.611903  142150 cri.go:89] found id: ""
	I1212 01:05:12.611931  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.611940  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:12.611947  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:12.612019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:12.647038  142150 cri.go:89] found id: ""
	I1212 01:05:12.647078  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.647090  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:12.647099  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:12.647188  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:12.684078  142150 cri.go:89] found id: ""
	I1212 01:05:12.684111  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.684123  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:12.684132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:12.684194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:12.720094  142150 cri.go:89] found id: ""
	I1212 01:05:12.720125  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.720137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:12.720145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:12.720208  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:12.762457  142150 cri.go:89] found id: ""
	I1212 01:05:12.762492  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.762504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:12.762512  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:12.762564  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:12.798100  142150 cri.go:89] found id: ""
	I1212 01:05:12.798131  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.798139  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:12.798145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:12.798195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:12.832455  142150 cri.go:89] found id: ""
	I1212 01:05:12.832486  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.832494  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:12.832501  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:12.832558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:12.866206  142150 cri.go:89] found id: ""
	I1212 01:05:12.866239  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.866249  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:12.866258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:12.866273  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:12.918512  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:12.918550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:12.932506  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:12.932535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:13.011647  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:13.011670  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:13.011689  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:13.090522  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:13.090565  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:13.343045  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.343706  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.096430  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.097196  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.207697  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.634171  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:15.648003  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:15.648067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:15.684747  142150 cri.go:89] found id: ""
	I1212 01:05:15.684780  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.684788  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:15.684795  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:15.684856  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:15.723209  142150 cri.go:89] found id: ""
	I1212 01:05:15.723236  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.723245  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:15.723252  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:15.723299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:15.761473  142150 cri.go:89] found id: ""
	I1212 01:05:15.761504  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.761513  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:15.761519  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:15.761588  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:15.795637  142150 cri.go:89] found id: ""
	I1212 01:05:15.795668  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.795677  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:15.795685  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:15.795735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:15.835576  142150 cri.go:89] found id: ""
	I1212 01:05:15.835616  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.835628  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:15.835636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:15.835690  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:15.877331  142150 cri.go:89] found id: ""
	I1212 01:05:15.877359  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.877370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:15.877379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:15.877440  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:15.914225  142150 cri.go:89] found id: ""
	I1212 01:05:15.914255  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.914265  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:15.914271  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:15.914323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:15.949819  142150 cri.go:89] found id: ""
	I1212 01:05:15.949845  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.949853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:15.949862  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:15.949877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:16.029950  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:16.029991  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:16.071065  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:16.071094  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:16.126731  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:16.126786  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:16.140774  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:16.140807  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:16.210269  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:18.710498  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:18.725380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:18.725462  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:18.762409  142150 cri.go:89] found id: ""
	I1212 01:05:18.762438  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.762446  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:18.762453  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:18.762501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:18.800308  142150 cri.go:89] found id: ""
	I1212 01:05:18.800336  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.800344  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:18.800351  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:18.800419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:18.834918  142150 cri.go:89] found id: ""
	I1212 01:05:18.834947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.834955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:18.834962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:18.835012  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:18.872434  142150 cri.go:89] found id: ""
	I1212 01:05:18.872470  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.872481  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:18.872490  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:18.872551  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:18.906919  142150 cri.go:89] found id: ""
	I1212 01:05:18.906947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.906955  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:18.906962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:18.907011  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:18.944626  142150 cri.go:89] found id: ""
	I1212 01:05:18.944661  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.944671  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:18.944677  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:18.944728  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:18.981196  142150 cri.go:89] found id: ""
	I1212 01:05:18.981224  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.981233  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:18.981239  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:18.981290  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:19.017640  142150 cri.go:89] found id: ""
	I1212 01:05:19.017669  142150 logs.go:282] 0 containers: []
	W1212 01:05:19.017679  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:19.017691  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:19.017728  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:19.089551  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:19.089582  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:19.089602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:19.176914  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:19.176958  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:19.223652  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:19.223694  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:19.281292  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:19.281353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:17.344863  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:19.348835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.595465  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:20.708087  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:22.708298  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.797351  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:21.811040  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:21.811120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:21.847213  142150 cri.go:89] found id: ""
	I1212 01:05:21.847242  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.847253  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:21.847261  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:21.847323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:21.883925  142150 cri.go:89] found id: ""
	I1212 01:05:21.883952  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.883961  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:21.883967  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:21.884029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:21.925919  142150 cri.go:89] found id: ""
	I1212 01:05:21.925946  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.925955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:21.925961  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:21.926025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:21.963672  142150 cri.go:89] found id: ""
	I1212 01:05:21.963708  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.963719  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:21.963728  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:21.963794  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:22.000058  142150 cri.go:89] found id: ""
	I1212 01:05:22.000086  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.000094  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:22.000100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:22.000153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:22.036262  142150 cri.go:89] found id: ""
	I1212 01:05:22.036294  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.036305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:22.036314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:22.036381  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:22.072312  142150 cri.go:89] found id: ""
	I1212 01:05:22.072348  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.072361  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:22.072369  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:22.072428  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:22.109376  142150 cri.go:89] found id: ""
	I1212 01:05:22.109406  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.109413  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:22.109422  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:22.109436  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:22.183975  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:22.184006  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:22.184024  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:22.262037  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:22.262076  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:22.306902  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:22.306934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:22.361922  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:22.361964  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:24.877203  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:24.891749  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:24.891822  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:24.926934  142150 cri.go:89] found id: ""
	I1212 01:05:24.926974  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.926987  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:24.926997  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:24.927061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:24.961756  142150 cri.go:89] found id: ""
	I1212 01:05:24.961791  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.961803  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:24.961812  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:24.961872  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:25.001414  142150 cri.go:89] found id: ""
	I1212 01:05:25.001449  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.001462  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:25.001470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:25.001536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:25.038398  142150 cri.go:89] found id: ""
	I1212 01:05:25.038429  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.038438  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:25.038443  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:25.038499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:25.074146  142150 cri.go:89] found id: ""
	I1212 01:05:25.074175  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.074184  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:25.074191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:25.074266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:25.112259  142150 cri.go:89] found id: ""
	I1212 01:05:25.112287  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.112295  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:25.112303  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:25.112366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:25.148819  142150 cri.go:89] found id: ""
	I1212 01:05:25.148846  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.148853  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:25.148859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:25.148916  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:25.191229  142150 cri.go:89] found id: ""
	I1212 01:05:25.191262  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.191274  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:25.191286  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:25.191298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:21.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:24.344442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:26.344638  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:23.095266  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.096246  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.097041  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.208225  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.706184  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.280584  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:25.280641  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:25.325436  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:25.325473  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:25.380358  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:25.380406  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:25.394854  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:25.394889  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:25.474359  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:27.975286  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:27.989833  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:27.989893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:28.027211  142150 cri.go:89] found id: ""
	I1212 01:05:28.027242  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.027254  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:28.027262  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:28.027319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:28.063115  142150 cri.go:89] found id: ""
	I1212 01:05:28.063147  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.063158  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:28.063165  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:28.063226  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:28.121959  142150 cri.go:89] found id: ""
	I1212 01:05:28.121993  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.122006  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:28.122014  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:28.122074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:28.161636  142150 cri.go:89] found id: ""
	I1212 01:05:28.161666  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.161674  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:28.161680  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:28.161745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:28.197581  142150 cri.go:89] found id: ""
	I1212 01:05:28.197615  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.197627  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:28.197636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:28.197704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:28.234811  142150 cri.go:89] found id: ""
	I1212 01:05:28.234839  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.234849  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:28.234857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:28.234914  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:28.275485  142150 cri.go:89] found id: ""
	I1212 01:05:28.275510  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.275518  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:28.275524  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:28.275570  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:28.311514  142150 cri.go:89] found id: ""
	I1212 01:05:28.311551  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.311562  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:28.311574  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:28.311608  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:28.362113  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:28.362153  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:28.376321  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:28.376353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:28.460365  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:28.460394  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:28.460412  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:28.545655  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:28.545697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:28.850925  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.344959  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.595032  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.595989  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.706696  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:32.206728  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.206974  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.088684  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:31.103954  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:31.104033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:31.143436  142150 cri.go:89] found id: ""
	I1212 01:05:31.143468  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.143478  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:31.143488  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:31.143541  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:31.181127  142150 cri.go:89] found id: ""
	I1212 01:05:31.181162  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.181173  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:31.181181  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:31.181246  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:31.217764  142150 cri.go:89] found id: ""
	I1212 01:05:31.217794  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.217805  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:31.217812  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:31.217882  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:31.253648  142150 cri.go:89] found id: ""
	I1212 01:05:31.253674  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.253683  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:31.253690  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:31.253745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:31.292365  142150 cri.go:89] found id: ""
	I1212 01:05:31.292393  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.292401  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:31.292407  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:31.292455  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:31.329834  142150 cri.go:89] found id: ""
	I1212 01:05:31.329866  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.329876  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:31.329883  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:31.329934  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:31.368679  142150 cri.go:89] found id: ""
	I1212 01:05:31.368712  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.368720  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:31.368726  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:31.368784  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:31.409003  142150 cri.go:89] found id: ""
	I1212 01:05:31.409028  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.409036  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:31.409053  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:31.409068  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:31.462888  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:31.462927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:31.477975  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:31.478011  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:31.545620  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:31.545648  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:31.545665  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:31.626530  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:31.626570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.167917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:34.183293  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:34.183372  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:34.219167  142150 cri.go:89] found id: ""
	I1212 01:05:34.219191  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.219200  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:34.219206  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:34.219265  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:34.254552  142150 cri.go:89] found id: ""
	I1212 01:05:34.254580  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.254588  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:34.254594  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:34.254645  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:34.289933  142150 cri.go:89] found id: ""
	I1212 01:05:34.289960  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.289969  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:34.289975  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:34.290027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:34.325468  142150 cri.go:89] found id: ""
	I1212 01:05:34.325497  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.325505  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:34.325510  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:34.325558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:34.364154  142150 cri.go:89] found id: ""
	I1212 01:05:34.364185  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.364197  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:34.364205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:34.364256  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:34.400516  142150 cri.go:89] found id: ""
	I1212 01:05:34.400546  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.400554  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:34.400559  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:34.400621  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:34.437578  142150 cri.go:89] found id: ""
	I1212 01:05:34.437608  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.437616  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:34.437622  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:34.437687  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:34.472061  142150 cri.go:89] found id: ""
	I1212 01:05:34.472094  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.472105  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:34.472117  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:34.472135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.526286  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:34.526340  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:34.610616  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:34.610664  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:34.625098  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:34.625130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:34.699706  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:34.699736  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:34.699759  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:33.844343  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.343847  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.096631  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.594963  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.707213  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:39.207473  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:37.282716  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:37.299415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:37.299486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:37.337783  142150 cri.go:89] found id: ""
	I1212 01:05:37.337820  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.337833  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:37.337842  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:37.337910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:37.375491  142150 cri.go:89] found id: ""
	I1212 01:05:37.375526  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.375539  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:37.375547  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:37.375637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:37.417980  142150 cri.go:89] found id: ""
	I1212 01:05:37.418016  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.418028  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:37.418037  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:37.418115  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:37.454902  142150 cri.go:89] found id: ""
	I1212 01:05:37.454936  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.454947  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:37.454956  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:37.455029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:37.492144  142150 cri.go:89] found id: ""
	I1212 01:05:37.492175  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.492188  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:37.492196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:37.492266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:37.531054  142150 cri.go:89] found id: ""
	I1212 01:05:37.531085  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.531094  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:37.531100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:37.531161  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:37.565127  142150 cri.go:89] found id: ""
	I1212 01:05:37.565169  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.565191  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:37.565209  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:37.565269  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:37.601233  142150 cri.go:89] found id: ""
	I1212 01:05:37.601273  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.601286  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:37.601300  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:37.601315  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:37.652133  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:37.652172  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:37.666974  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:37.667007  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:37.744500  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:37.744527  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:37.744544  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:37.825572  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:37.825611  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:38.842756  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.845163  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:38.595482  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.595779  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:41.707367  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:44.206693  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.366883  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:40.380597  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:40.380662  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:40.417588  142150 cri.go:89] found id: ""
	I1212 01:05:40.417614  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.417623  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:40.417629  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:40.417681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:40.452506  142150 cri.go:89] found id: ""
	I1212 01:05:40.452535  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.452547  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:40.452555  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:40.452620  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:40.496623  142150 cri.go:89] found id: ""
	I1212 01:05:40.496657  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.496669  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:40.496681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:40.496755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:40.534202  142150 cri.go:89] found id: ""
	I1212 01:05:40.534241  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.534266  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:40.534277  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:40.534337  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:40.580317  142150 cri.go:89] found id: ""
	I1212 01:05:40.580346  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.580359  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:40.580367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:40.580437  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:40.616814  142150 cri.go:89] found id: ""
	I1212 01:05:40.616842  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.616850  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:40.616857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:40.616909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:40.653553  142150 cri.go:89] found id: ""
	I1212 01:05:40.653584  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.653593  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:40.653603  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:40.653667  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:40.687817  142150 cri.go:89] found id: ""
	I1212 01:05:40.687843  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.687852  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:40.687862  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:40.687872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:40.739304  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:40.739343  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:40.753042  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:40.753074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:40.820091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:40.820112  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:40.820126  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:40.903503  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:40.903561  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.446157  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:43.461289  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:43.461365  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:43.503352  142150 cri.go:89] found id: ""
	I1212 01:05:43.503385  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.503394  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:43.503402  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:43.503466  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:43.541576  142150 cri.go:89] found id: ""
	I1212 01:05:43.541610  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.541619  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:43.541626  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:43.541683  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:43.581255  142150 cri.go:89] found id: ""
	I1212 01:05:43.581285  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.581298  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:43.581305  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:43.581384  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:43.622081  142150 cri.go:89] found id: ""
	I1212 01:05:43.622114  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.622126  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:43.622135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:43.622201  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:43.657001  142150 cri.go:89] found id: ""
	I1212 01:05:43.657032  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.657041  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:43.657048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:43.657114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:43.691333  142150 cri.go:89] found id: ""
	I1212 01:05:43.691362  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.691370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:43.691376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:43.691425  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:43.728745  142150 cri.go:89] found id: ""
	I1212 01:05:43.728779  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.728791  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:43.728799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:43.728864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:43.764196  142150 cri.go:89] found id: ""
	I1212 01:05:43.764229  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.764241  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:43.764253  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:43.764268  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.804433  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:43.804469  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:43.858783  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:43.858822  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:43.873582  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:43.873610  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:43.949922  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:43.949945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:43.949962  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:43.343827  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.346793  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:43.095993  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.096437  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.206828  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:48.708067  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.531390  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:46.546806  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:46.546881  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:46.583062  142150 cri.go:89] found id: ""
	I1212 01:05:46.583103  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.583116  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:46.583124  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:46.583187  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:46.621483  142150 cri.go:89] found id: ""
	I1212 01:05:46.621513  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.621524  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:46.621532  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:46.621595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:46.658400  142150 cri.go:89] found id: ""
	I1212 01:05:46.658431  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.658440  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:46.658450  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:46.658520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:46.694368  142150 cri.go:89] found id: ""
	I1212 01:05:46.694393  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.694407  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:46.694413  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:46.694469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:46.733456  142150 cri.go:89] found id: ""
	I1212 01:05:46.733492  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.733504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:46.733513  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:46.733574  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:46.767206  142150 cri.go:89] found id: ""
	I1212 01:05:46.767236  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.767248  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:46.767255  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:46.767317  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:46.803520  142150 cri.go:89] found id: ""
	I1212 01:05:46.803554  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.803564  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:46.803575  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:46.803657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:46.849563  142150 cri.go:89] found id: ""
	I1212 01:05:46.849590  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.849597  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:46.849606  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:46.849618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:46.862800  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:46.862831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:46.931858  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:46.931883  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:46.931896  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:47.009125  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:47.009167  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.050830  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:47.050858  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.604639  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:49.618087  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:49.618153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:49.653674  142150 cri.go:89] found id: ""
	I1212 01:05:49.653703  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.653712  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:49.653718  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:49.653772  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:49.688391  142150 cri.go:89] found id: ""
	I1212 01:05:49.688428  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.688439  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:49.688446  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:49.688516  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:49.729378  142150 cri.go:89] found id: ""
	I1212 01:05:49.729412  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.729423  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:49.729432  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:49.729492  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:49.765171  142150 cri.go:89] found id: ""
	I1212 01:05:49.765198  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.765206  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:49.765213  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:49.765260  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:49.800980  142150 cri.go:89] found id: ""
	I1212 01:05:49.801018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.801027  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:49.801034  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:49.801086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:49.836122  142150 cri.go:89] found id: ""
	I1212 01:05:49.836149  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.836161  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:49.836169  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:49.836235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:49.873978  142150 cri.go:89] found id: ""
	I1212 01:05:49.874018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.874027  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:49.874032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:49.874086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:49.909709  142150 cri.go:89] found id: ""
	I1212 01:05:49.909741  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.909754  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:49.909766  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:49.909783  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.963352  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:49.963394  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:49.977813  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:49.977841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:50.054423  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:50.054452  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:50.054470  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:50.133375  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:50.133416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.843200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:49.844564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:47.595931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:50.095312  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.096092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:51.206349  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:53.206853  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.673427  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:52.687196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:52.687259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:52.725001  142150 cri.go:89] found id: ""
	I1212 01:05:52.725031  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.725039  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:52.725045  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:52.725110  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:52.760885  142150 cri.go:89] found id: ""
	I1212 01:05:52.760923  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.760934  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:52.760941  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:52.761025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:52.798583  142150 cri.go:89] found id: ""
	I1212 01:05:52.798615  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.798627  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:52.798635  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:52.798700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:52.835957  142150 cri.go:89] found id: ""
	I1212 01:05:52.835983  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.835991  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:52.835998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:52.836065  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:52.876249  142150 cri.go:89] found id: ""
	I1212 01:05:52.876281  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.876292  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:52.876299  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:52.876397  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:52.911667  142150 cri.go:89] found id: ""
	I1212 01:05:52.911700  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.911712  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:52.911720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:52.911796  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:52.946781  142150 cri.go:89] found id: ""
	I1212 01:05:52.946808  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.946820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:52.946827  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:52.946889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:52.985712  142150 cri.go:89] found id: ""
	I1212 01:05:52.985740  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.985752  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:52.985762  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:52.985778  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:53.038522  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:53.038563  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:53.052336  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:53.052382  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:53.132247  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:53.132280  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:53.132297  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:53.208823  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:53.208851  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:52.344518  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.344667  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.594738  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:56.595036  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:57.207827  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.747479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:55.760703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:55.760765  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:55.797684  142150 cri.go:89] found id: ""
	I1212 01:05:55.797720  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.797732  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:55.797740  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:55.797807  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:55.840900  142150 cri.go:89] found id: ""
	I1212 01:05:55.840933  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.840944  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:55.840953  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:55.841033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:55.879098  142150 cri.go:89] found id: ""
	I1212 01:05:55.879131  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.879144  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:55.879152  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:55.879217  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:55.914137  142150 cri.go:89] found id: ""
	I1212 01:05:55.914166  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.914174  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:55.914181  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:55.914238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:55.950608  142150 cri.go:89] found id: ""
	I1212 01:05:55.950635  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.950644  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:55.950654  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:55.950705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:55.992162  142150 cri.go:89] found id: ""
	I1212 01:05:55.992187  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.992196  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:55.992202  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:55.992254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:56.028071  142150 cri.go:89] found id: ""
	I1212 01:05:56.028097  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.028105  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:56.028111  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:56.028164  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:56.063789  142150 cri.go:89] found id: ""
	I1212 01:05:56.063814  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.063822  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:56.063832  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:56.063844  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:56.118057  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:56.118096  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.132908  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:56.132939  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:56.200923  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:56.200951  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:56.200971  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:56.283272  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:56.283321  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:58.825548  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:58.839298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:58.839368  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:58.874249  142150 cri.go:89] found id: ""
	I1212 01:05:58.874289  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.874301  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:58.874313  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:58.874391  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:58.909238  142150 cri.go:89] found id: ""
	I1212 01:05:58.909273  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.909286  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:58.909294  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:58.909359  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:58.945112  142150 cri.go:89] found id: ""
	I1212 01:05:58.945139  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.945146  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:58.945154  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:58.945203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:58.981101  142150 cri.go:89] found id: ""
	I1212 01:05:58.981153  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.981168  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:58.981176  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:58.981241  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:59.015095  142150 cri.go:89] found id: ""
	I1212 01:05:59.015135  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.015147  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:59.015158  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:59.015224  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:59.051606  142150 cri.go:89] found id: ""
	I1212 01:05:59.051640  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.051650  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:59.051659  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:59.051719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:59.088125  142150 cri.go:89] found id: ""
	I1212 01:05:59.088153  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.088161  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:59.088166  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:59.088223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:59.127803  142150 cri.go:89] found id: ""
	I1212 01:05:59.127829  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.127841  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:59.127853  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:59.127871  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:59.204831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:59.204857  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:59.204872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:59.285346  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:59.285387  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:59.324194  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:59.324233  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:59.378970  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:59.379022  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.845550  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.344473  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:58.595556  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:00.595723  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.706748  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.709131  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.893635  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:01.907481  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:01.907606  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:01.949985  142150 cri.go:89] found id: ""
	I1212 01:06:01.950022  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.950035  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:01.950043  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:01.950112  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:01.986884  142150 cri.go:89] found id: ""
	I1212 01:06:01.986914  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.986923  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:01.986928  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:01.986994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:02.025010  142150 cri.go:89] found id: ""
	I1212 01:06:02.025044  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.025056  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:02.025063  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:02.025137  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:02.061300  142150 cri.go:89] found id: ""
	I1212 01:06:02.061340  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.061352  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:02.061361  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:02.061427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:02.098627  142150 cri.go:89] found id: ""
	I1212 01:06:02.098667  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.098677  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:02.098684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:02.098744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:02.137005  142150 cri.go:89] found id: ""
	I1212 01:06:02.137030  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.137038  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:02.137044  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:02.137104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:02.172052  142150 cri.go:89] found id: ""
	I1212 01:06:02.172086  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.172096  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:02.172102  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:02.172154  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:02.207721  142150 cri.go:89] found id: ""
	I1212 01:06:02.207750  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.207761  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:02.207771  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:02.207787  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:02.221576  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:02.221605  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:02.291780  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:02.291812  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:02.291826  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:02.376553  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:02.376595  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:02.418407  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:02.418446  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:04.973347  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:04.988470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:04.988545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:05.024045  142150 cri.go:89] found id: ""
	I1212 01:06:05.024076  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.024085  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:05.024092  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:05.024149  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:05.060055  142150 cri.go:89] found id: ""
	I1212 01:06:05.060079  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.060089  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:05.060095  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:05.060145  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:05.097115  142150 cri.go:89] found id: ""
	I1212 01:06:05.097142  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.097152  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:05.097160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:05.097220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:05.133941  142150 cri.go:89] found id: ""
	I1212 01:06:05.133976  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.133990  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:05.133998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:05.134063  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:05.169157  142150 cri.go:89] found id: ""
	I1212 01:06:05.169185  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.169193  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:05.169200  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:05.169253  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:05.206434  142150 cri.go:89] found id: ""
	I1212 01:06:05.206464  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.206475  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:05.206484  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:05.206546  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:01.842981  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.843341  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.843811  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:02.597066  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:04.597793  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:07.095874  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:06.206955  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:08.208809  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.248363  142150 cri.go:89] found id: ""
	I1212 01:06:05.248397  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.248409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:05.248417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:05.248485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:05.284898  142150 cri.go:89] found id: ""
	I1212 01:06:05.284932  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.284945  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:05.284958  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:05.284974  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:05.362418  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:05.362445  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:05.362464  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:05.446289  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:05.446349  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:05.487075  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:05.487107  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:05.542538  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:05.542582  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.057586  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:08.070959  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:08.071019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:08.109906  142150 cri.go:89] found id: ""
	I1212 01:06:08.109936  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.109945  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:08.109951  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:08.110005  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:08.145130  142150 cri.go:89] found id: ""
	I1212 01:06:08.145159  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.145168  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:08.145175  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:08.145223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:08.183454  142150 cri.go:89] found id: ""
	I1212 01:06:08.183485  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.183496  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:08.183504  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:08.183573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:08.218728  142150 cri.go:89] found id: ""
	I1212 01:06:08.218752  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.218763  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:08.218772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:08.218835  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:08.256230  142150 cri.go:89] found id: ""
	I1212 01:06:08.256263  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.256274  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:08.256283  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:08.256345  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:08.294179  142150 cri.go:89] found id: ""
	I1212 01:06:08.294209  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.294221  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:08.294229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:08.294293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:08.335793  142150 cri.go:89] found id: ""
	I1212 01:06:08.335822  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.335835  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:08.335843  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:08.335905  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:08.387704  142150 cri.go:89] found id: ""
	I1212 01:06:08.387734  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.387746  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:08.387757  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:08.387773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:08.465260  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:08.465307  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:08.508088  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:08.508129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:08.558617  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:08.558655  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.573461  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:08.573489  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:08.649664  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:07.844408  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.343200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:09.595982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:12.094513  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.708379  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:13.207302  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:11.150614  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:11.164991  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:11.165062  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:11.201977  142150 cri.go:89] found id: ""
	I1212 01:06:11.202011  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.202045  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:11.202055  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:11.202124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:11.243638  142150 cri.go:89] found id: ""
	I1212 01:06:11.243667  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.243676  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:11.243682  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:11.243742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:11.279577  142150 cri.go:89] found id: ""
	I1212 01:06:11.279621  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.279634  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:11.279642  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:11.279709  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:11.317344  142150 cri.go:89] found id: ""
	I1212 01:06:11.317378  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.317386  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:11.317392  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:11.317457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:11.358331  142150 cri.go:89] found id: ""
	I1212 01:06:11.358361  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.358373  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:11.358381  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:11.358439  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:11.393884  142150 cri.go:89] found id: ""
	I1212 01:06:11.393911  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.393919  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:11.393926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:11.393974  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:11.433243  142150 cri.go:89] found id: ""
	I1212 01:06:11.433290  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.433302  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:11.433310  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:11.433374  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:11.478597  142150 cri.go:89] found id: ""
	I1212 01:06:11.478625  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.478637  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:11.478650  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:11.478667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:11.528096  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:11.528133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:11.542118  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:11.542149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:11.612414  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:11.612435  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:11.612451  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:11.689350  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:11.689389  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.230677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:14.245866  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:14.245970  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:14.283451  142150 cri.go:89] found id: ""
	I1212 01:06:14.283487  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.283495  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:14.283502  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:14.283552  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:14.318812  142150 cri.go:89] found id: ""
	I1212 01:06:14.318840  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.318848  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:14.318855  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:14.318904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:14.356489  142150 cri.go:89] found id: ""
	I1212 01:06:14.356519  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.356527  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:14.356533  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:14.356590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:14.394224  142150 cri.go:89] found id: ""
	I1212 01:06:14.394260  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.394271  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:14.394279  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:14.394350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:14.432440  142150 cri.go:89] found id: ""
	I1212 01:06:14.432467  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.432480  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:14.432488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:14.432540  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:14.469777  142150 cri.go:89] found id: ""
	I1212 01:06:14.469822  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.469835  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:14.469844  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:14.469904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:14.504830  142150 cri.go:89] found id: ""
	I1212 01:06:14.504860  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.504872  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:14.504881  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:14.504941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:14.539399  142150 cri.go:89] found id: ""
	I1212 01:06:14.539423  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.539432  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:14.539441  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:14.539454  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:14.552716  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:14.552749  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:14.628921  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:14.628945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:14.628959  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:14.707219  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:14.707255  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.765953  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:14.765986  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:12.343941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.843333  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.095296  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:16.596411  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:15.706990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.707150  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.324233  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:17.337428  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:17.337499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:17.374493  142150 cri.go:89] found id: ""
	I1212 01:06:17.374526  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.374538  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:17.374547  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:17.374616  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:17.408494  142150 cri.go:89] found id: ""
	I1212 01:06:17.408519  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.408527  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:17.408535  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:17.408582  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:17.452362  142150 cri.go:89] found id: ""
	I1212 01:06:17.452389  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.452397  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:17.452403  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:17.452456  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:17.493923  142150 cri.go:89] found id: ""
	I1212 01:06:17.493957  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.493968  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:17.493976  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:17.494037  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:17.529519  142150 cri.go:89] found id: ""
	I1212 01:06:17.529548  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.529556  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:17.529562  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:17.529610  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:17.570272  142150 cri.go:89] found id: ""
	I1212 01:06:17.570297  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.570305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:17.570312  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:17.570361  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:17.609326  142150 cri.go:89] found id: ""
	I1212 01:06:17.609360  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.609371  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:17.609379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:17.609470  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:17.642814  142150 cri.go:89] found id: ""
	I1212 01:06:17.642844  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.642853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:17.642863  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:17.642875  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:17.656476  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:17.656510  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:17.726997  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:17.727024  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:17.727039  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:17.803377  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:17.803424  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:17.851190  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:17.851222  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:17.344804  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.347642  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.096235  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.594712  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.707303  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.707482  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:24.208937  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:20.406953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:20.420410  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:20.420484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:20.462696  142150 cri.go:89] found id: ""
	I1212 01:06:20.462733  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.462744  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:20.462752  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:20.462815  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:20.522881  142150 cri.go:89] found id: ""
	I1212 01:06:20.522906  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.522915  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:20.522921  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:20.522979  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:20.575876  142150 cri.go:89] found id: ""
	I1212 01:06:20.575917  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.575928  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:20.575936  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:20.576003  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:20.627875  142150 cri.go:89] found id: ""
	I1212 01:06:20.627907  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.627919  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:20.627926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:20.627976  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:20.668323  142150 cri.go:89] found id: ""
	I1212 01:06:20.668353  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.668365  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:20.668372  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:20.668441  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:20.705907  142150 cri.go:89] found id: ""
	I1212 01:06:20.705942  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.705954  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:20.705963  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:20.706023  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:20.740221  142150 cri.go:89] found id: ""
	I1212 01:06:20.740249  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.740257  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:20.740263  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:20.740328  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:20.780346  142150 cri.go:89] found id: ""
	I1212 01:06:20.780372  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.780380  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:20.780390  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:20.780407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:20.837660  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:20.837699  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:20.852743  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:20.852775  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:20.928353  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:20.928385  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:20.928401  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:21.009919  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:21.009961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:23.553897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:23.568667  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:23.568742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:23.607841  142150 cri.go:89] found id: ""
	I1212 01:06:23.607873  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.607884  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:23.607891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:23.607945  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:23.645461  142150 cri.go:89] found id: ""
	I1212 01:06:23.645494  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.645505  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:23.645513  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:23.645578  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:23.681140  142150 cri.go:89] found id: ""
	I1212 01:06:23.681165  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.681174  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:23.681180  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:23.681230  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:23.718480  142150 cri.go:89] found id: ""
	I1212 01:06:23.718515  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.718526  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:23.718534  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:23.718602  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:23.760206  142150 cri.go:89] found id: ""
	I1212 01:06:23.760235  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.760243  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:23.760249  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:23.760302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:23.797384  142150 cri.go:89] found id: ""
	I1212 01:06:23.797417  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.797431  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:23.797439  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:23.797496  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:23.830608  142150 cri.go:89] found id: ""
	I1212 01:06:23.830639  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.830650  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:23.830658  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:23.830722  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:23.867481  142150 cri.go:89] found id: ""
	I1212 01:06:23.867509  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.867522  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:23.867534  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:23.867551  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:23.922529  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:23.922579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:23.936763  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:23.936794  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:24.004371  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:24.004398  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:24.004413  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:24.083097  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:24.083136  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:21.842975  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.845498  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.343574  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.596224  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.094625  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.707610  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:29.208425  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.633394  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:26.646898  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:26.646977  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:26.680382  142150 cri.go:89] found id: ""
	I1212 01:06:26.680411  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.680421  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:26.680427  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:26.680475  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:26.716948  142150 cri.go:89] found id: ""
	I1212 01:06:26.716982  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.716994  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:26.717001  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:26.717090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:26.753141  142150 cri.go:89] found id: ""
	I1212 01:06:26.753168  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.753176  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:26.753182  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:26.753231  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:26.791025  142150 cri.go:89] found id: ""
	I1212 01:06:26.791056  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.791068  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:26.791074  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:26.791130  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:26.829914  142150 cri.go:89] found id: ""
	I1212 01:06:26.829952  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.829965  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:26.829973  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:26.830046  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:26.865990  142150 cri.go:89] found id: ""
	I1212 01:06:26.866022  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.866045  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:26.866053  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:26.866133  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:26.906007  142150 cri.go:89] found id: ""
	I1212 01:06:26.906040  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.906052  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:26.906060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:26.906141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:26.946004  142150 cri.go:89] found id: ""
	I1212 01:06:26.946038  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.946048  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:26.946057  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:26.946073  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:27.018967  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:27.018996  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:27.019013  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:27.100294  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:27.100334  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:27.141147  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:27.141190  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:27.193161  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:27.193200  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:29.709616  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:29.723336  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:29.723413  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:29.769938  142150 cri.go:89] found id: ""
	I1212 01:06:29.769966  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.769977  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:29.769985  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:29.770048  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:29.809109  142150 cri.go:89] found id: ""
	I1212 01:06:29.809147  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.809160  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:29.809168  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:29.809229  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:29.845444  142150 cri.go:89] found id: ""
	I1212 01:06:29.845471  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.845481  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:29.845488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:29.845548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:29.882109  142150 cri.go:89] found id: ""
	I1212 01:06:29.882138  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.882147  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:29.882153  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:29.882203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:29.928731  142150 cri.go:89] found id: ""
	I1212 01:06:29.928764  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.928777  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:29.928785  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:29.928849  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:29.972994  142150 cri.go:89] found id: ""
	I1212 01:06:29.973026  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.973041  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:29.973048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:29.973098  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:30.009316  142150 cri.go:89] found id: ""
	I1212 01:06:30.009349  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.009357  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:30.009363  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:30.009422  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:30.043082  142150 cri.go:89] found id: ""
	I1212 01:06:30.043111  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.043122  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:30.043134  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:30.043149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:30.097831  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:30.097866  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:30.112873  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:30.112906  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:30.187035  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:30.187061  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:30.187081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:28.843986  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.343502  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:28.096043  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.594875  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.707976  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:34.208061  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.273106  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:30.273155  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:32.819179  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:32.833486  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:32.833555  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:32.872579  142150 cri.go:89] found id: ""
	I1212 01:06:32.872622  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.872631  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:32.872645  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:32.872700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:32.909925  142150 cri.go:89] found id: ""
	I1212 01:06:32.909958  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.909970  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:32.909979  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:32.910053  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:32.949085  142150 cri.go:89] found id: ""
	I1212 01:06:32.949116  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.949127  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:32.949135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:32.949197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:32.985755  142150 cri.go:89] found id: ""
	I1212 01:06:32.985782  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.985790  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:32.985796  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:32.985845  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:33.028340  142150 cri.go:89] found id: ""
	I1212 01:06:33.028367  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.028374  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:33.028380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:33.028432  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:33.064254  142150 cri.go:89] found id: ""
	I1212 01:06:33.064283  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.064292  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:33.064298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:33.064349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:33.099905  142150 cri.go:89] found id: ""
	I1212 01:06:33.099936  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.099943  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:33.099949  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:33.100008  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:33.137958  142150 cri.go:89] found id: ""
	I1212 01:06:33.137993  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.138004  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:33.138016  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:33.138034  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:33.190737  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:33.190776  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:33.205466  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:33.205502  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:33.278815  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:33.278844  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:33.278863  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:33.357387  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:33.357429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:33.843106  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.344148  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:33.095175  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.095369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:37.095797  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.707296  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.207875  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.898317  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:35.913832  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:35.913907  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:35.950320  142150 cri.go:89] found id: ""
	I1212 01:06:35.950345  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.950353  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:35.950359  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:35.950407  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:35.989367  142150 cri.go:89] found id: ""
	I1212 01:06:35.989394  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.989403  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:35.989409  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:35.989457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:36.024118  142150 cri.go:89] found id: ""
	I1212 01:06:36.024148  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.024155  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:36.024163  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:36.024221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:36.059937  142150 cri.go:89] found id: ""
	I1212 01:06:36.059966  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.059974  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:36.059980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:36.060030  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:36.096897  142150 cri.go:89] found id: ""
	I1212 01:06:36.096921  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.096933  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:36.096941  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:36.096994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:36.134387  142150 cri.go:89] found id: ""
	I1212 01:06:36.134412  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.134420  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:36.134426  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:36.134490  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:36.177414  142150 cri.go:89] found id: ""
	I1212 01:06:36.177452  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.177464  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:36.177471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:36.177533  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:36.221519  142150 cri.go:89] found id: ""
	I1212 01:06:36.221553  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.221563  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:36.221575  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:36.221590  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:36.234862  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:36.234891  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:36.314361  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:36.314391  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:36.314407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:36.398283  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:36.398328  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:36.441441  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:36.441481  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:38.995369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:39.009149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:39.009221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:39.044164  142150 cri.go:89] found id: ""
	I1212 01:06:39.044194  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.044204  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:39.044210  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:39.044259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:39.080145  142150 cri.go:89] found id: ""
	I1212 01:06:39.080180  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.080191  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:39.080197  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:39.080254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:39.119128  142150 cri.go:89] found id: ""
	I1212 01:06:39.119156  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.119167  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:39.119174  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:39.119240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:39.157444  142150 cri.go:89] found id: ""
	I1212 01:06:39.157476  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.157487  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:39.157495  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:39.157562  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:39.191461  142150 cri.go:89] found id: ""
	I1212 01:06:39.191486  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.191497  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:39.191505  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:39.191573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:39.227742  142150 cri.go:89] found id: ""
	I1212 01:06:39.227769  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.227777  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:39.227783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:39.227832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:39.268207  142150 cri.go:89] found id: ""
	I1212 01:06:39.268239  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.268251  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:39.268259  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:39.268319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:39.304054  142150 cri.go:89] found id: ""
	I1212 01:06:39.304092  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.304103  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:39.304115  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:39.304128  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:39.381937  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:39.381979  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:39.421824  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:39.421864  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:39.475968  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:39.476020  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:39.491398  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:39.491429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:39.568463  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:38.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.343589  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.594883  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.594919  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.707035  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.707860  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:42.068594  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:42.082041  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:42.082123  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:42.121535  142150 cri.go:89] found id: ""
	I1212 01:06:42.121562  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.121570  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:42.121577  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:42.121627  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:42.156309  142150 cri.go:89] found id: ""
	I1212 01:06:42.156341  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.156350  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:42.156364  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:42.156427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:42.190111  142150 cri.go:89] found id: ""
	I1212 01:06:42.190137  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.190145  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:42.190151  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:42.190209  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:42.225424  142150 cri.go:89] found id: ""
	I1212 01:06:42.225452  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.225461  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:42.225468  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:42.225526  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:42.260519  142150 cri.go:89] found id: ""
	I1212 01:06:42.260552  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.260564  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:42.260576  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:42.260644  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:42.296987  142150 cri.go:89] found id: ""
	I1212 01:06:42.297017  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.297028  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:42.297036  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:42.297109  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:42.331368  142150 cri.go:89] found id: ""
	I1212 01:06:42.331400  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.331409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:42.331415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:42.331482  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:42.367010  142150 cri.go:89] found id: ""
	I1212 01:06:42.367051  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.367062  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:42.367075  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:42.367093  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:42.381264  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:42.381299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:42.452831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:42.452856  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:42.452877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:42.531965  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:42.532006  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:42.571718  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:42.571757  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.128570  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:45.142897  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:45.142969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:45.186371  142150 cri.go:89] found id: ""
	I1212 01:06:45.186404  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.186412  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:45.186418  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:45.186468  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:45.224085  142150 cri.go:89] found id: ""
	I1212 01:06:45.224115  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.224123  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:45.224129  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:45.224195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:43.346470  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.845269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.595640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.596624  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.708204  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:48.206947  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.258477  142150 cri.go:89] found id: ""
	I1212 01:06:45.258510  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.258522  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:45.258530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:45.258590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:45.293091  142150 cri.go:89] found id: ""
	I1212 01:06:45.293125  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.293137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:45.293145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:45.293211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:45.331275  142150 cri.go:89] found id: ""
	I1212 01:06:45.331314  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.331325  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:45.331332  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:45.331400  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:45.374915  142150 cri.go:89] found id: ""
	I1212 01:06:45.374943  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.374956  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:45.374965  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:45.375027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:45.415450  142150 cri.go:89] found id: ""
	I1212 01:06:45.415480  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.415489  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:45.415496  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:45.415548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:45.454407  142150 cri.go:89] found id: ""
	I1212 01:06:45.454431  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.454439  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:45.454449  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:45.454460  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.508573  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:45.508612  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:45.524049  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:45.524085  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:45.593577  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:45.593602  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:45.593618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:45.678581  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:45.678620  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.221523  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:48.235146  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:48.235212  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:48.271845  142150 cri.go:89] found id: ""
	I1212 01:06:48.271875  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.271885  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:48.271891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:48.271944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:48.308558  142150 cri.go:89] found id: ""
	I1212 01:06:48.308589  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.308602  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:48.308610  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:48.308673  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:48.346395  142150 cri.go:89] found id: ""
	I1212 01:06:48.346423  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.346434  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:48.346440  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:48.346501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:48.381505  142150 cri.go:89] found id: ""
	I1212 01:06:48.381536  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.381548  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:48.381555  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:48.381617  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:48.417829  142150 cri.go:89] found id: ""
	I1212 01:06:48.417859  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.417871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:48.417878  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:48.417944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:48.453476  142150 cri.go:89] found id: ""
	I1212 01:06:48.453508  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.453519  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:48.453528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:48.453592  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:48.490500  142150 cri.go:89] found id: ""
	I1212 01:06:48.490531  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.490541  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:48.490547  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:48.490597  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:48.527492  142150 cri.go:89] found id: ""
	I1212 01:06:48.527520  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.527529  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:48.527539  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:48.527550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.570458  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:48.570499  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:48.623986  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:48.624031  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:48.638363  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:48.638392  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:48.709373  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:48.709400  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:48.709416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:48.344831  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.345010  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:47.596708  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.094517  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:52.094931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.706903  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:53.207824  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:51.291629  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:51.305060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:51.305140  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:51.340368  142150 cri.go:89] found id: ""
	I1212 01:06:51.340394  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.340404  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:51.340411  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:51.340489  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:51.381421  142150 cri.go:89] found id: ""
	I1212 01:06:51.381453  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.381466  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:51.381474  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:51.381536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:51.421482  142150 cri.go:89] found id: ""
	I1212 01:06:51.421518  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.421530  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:51.421538  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:51.421605  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:51.457190  142150 cri.go:89] found id: ""
	I1212 01:06:51.457217  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.457227  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:51.457236  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:51.457302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:51.496149  142150 cri.go:89] found id: ""
	I1212 01:06:51.496184  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.496196  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:51.496205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:51.496270  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:51.529779  142150 cri.go:89] found id: ""
	I1212 01:06:51.529809  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.529820  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:51.529826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:51.529893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:51.568066  142150 cri.go:89] found id: ""
	I1212 01:06:51.568105  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.568118  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:51.568126  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:51.568197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:51.605556  142150 cri.go:89] found id: ""
	I1212 01:06:51.605593  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.605605  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:51.605616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:51.605632  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:51.680531  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:51.680570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:51.727663  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:51.727697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:51.780013  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:51.780053  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:51.794203  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:51.794232  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:51.869407  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.369854  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:54.383539  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:54.383625  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:54.418536  142150 cri.go:89] found id: ""
	I1212 01:06:54.418574  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.418586  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:54.418594  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:54.418657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:54.454485  142150 cri.go:89] found id: ""
	I1212 01:06:54.454515  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.454523  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:54.454531  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:54.454581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:54.494254  142150 cri.go:89] found id: ""
	I1212 01:06:54.494284  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.494296  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:54.494304  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:54.494366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:54.532727  142150 cri.go:89] found id: ""
	I1212 01:06:54.532757  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.532768  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:54.532776  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:54.532862  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:54.569817  142150 cri.go:89] found id: ""
	I1212 01:06:54.569845  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.569856  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:54.569864  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:54.569927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:54.628530  142150 cri.go:89] found id: ""
	I1212 01:06:54.628564  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.628577  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:54.628585  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:54.628635  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:54.666761  142150 cri.go:89] found id: ""
	I1212 01:06:54.666792  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.666801  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:54.666808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:54.666879  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:54.703699  142150 cri.go:89] found id: ""
	I1212 01:06:54.703726  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.703737  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:54.703749  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:54.703764  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:54.754635  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:54.754672  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:54.769112  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:54.769143  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:54.845563  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.845580  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:54.845591  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:54.922651  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:54.922690  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:52.843114  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.845370  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.095381  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:56.097745  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:55.207916  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.708907  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.467454  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:57.480673  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:57.480769  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:57.517711  142150 cri.go:89] found id: ""
	I1212 01:06:57.517737  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.517745  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:57.517751  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:57.517813  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:57.552922  142150 cri.go:89] found id: ""
	I1212 01:06:57.552948  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.552956  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:57.552977  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:57.553061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:57.589801  142150 cri.go:89] found id: ""
	I1212 01:06:57.589827  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.589839  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:57.589845  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:57.589909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:57.626088  142150 cri.go:89] found id: ""
	I1212 01:06:57.626123  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.626135  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:57.626142  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:57.626211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:57.661228  142150 cri.go:89] found id: ""
	I1212 01:06:57.661261  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.661273  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:57.661281  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:57.661344  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:57.699523  142150 cri.go:89] found id: ""
	I1212 01:06:57.699551  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.699559  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:57.699565  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:57.699641  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:57.739000  142150 cri.go:89] found id: ""
	I1212 01:06:57.739032  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.739043  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:57.739051  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:57.739128  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:57.776691  142150 cri.go:89] found id: ""
	I1212 01:06:57.776723  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.776732  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:57.776743  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:57.776767  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:57.828495  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:57.828535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:57.843935  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:57.843970  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:57.916420  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:57.916446  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:57.916463  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:57.994107  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:57.994158  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:57.344917  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:59.844269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:58.595415  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:01.095794  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.208708  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:02.707173  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.540646  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:00.554032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:00.554141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:00.590815  142150 cri.go:89] found id: ""
	I1212 01:07:00.590843  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.590852  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:00.590858  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:00.590919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:00.627460  142150 cri.go:89] found id: ""
	I1212 01:07:00.627494  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.627507  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:00.627515  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:00.627586  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:00.667429  142150 cri.go:89] found id: ""
	I1212 01:07:00.667472  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.667484  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:00.667494  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:00.667558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:00.713026  142150 cri.go:89] found id: ""
	I1212 01:07:00.713053  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.713060  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:00.713067  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:00.713129  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:00.748218  142150 cri.go:89] found id: ""
	I1212 01:07:00.748251  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.748264  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:00.748272  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:00.748325  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:00.786287  142150 cri.go:89] found id: ""
	I1212 01:07:00.786314  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.786322  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:00.786331  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:00.786389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:00.822957  142150 cri.go:89] found id: ""
	I1212 01:07:00.822986  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.822999  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:00.823007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:00.823081  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:00.862310  142150 cri.go:89] found id: ""
	I1212 01:07:00.862342  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.862354  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:00.862368  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:00.862385  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:00.930308  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:00.930343  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:00.930360  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:01.013889  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:01.013934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:01.064305  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:01.064342  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:01.133631  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:01.133678  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:03.648853  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:03.663287  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:03.663349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:03.700723  142150 cri.go:89] found id: ""
	I1212 01:07:03.700754  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.700766  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:03.700774  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:03.700840  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:03.741025  142150 cri.go:89] found id: ""
	I1212 01:07:03.741054  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.741065  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:03.741073  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:03.741147  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:03.782877  142150 cri.go:89] found id: ""
	I1212 01:07:03.782914  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.782927  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:03.782935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:03.782998  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:03.819227  142150 cri.go:89] found id: ""
	I1212 01:07:03.819272  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.819285  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:03.819292  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:03.819341  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:03.856660  142150 cri.go:89] found id: ""
	I1212 01:07:03.856687  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.856695  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:03.856701  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:03.856750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:03.893368  142150 cri.go:89] found id: ""
	I1212 01:07:03.893400  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.893410  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:03.893417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:03.893469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:03.929239  142150 cri.go:89] found id: ""
	I1212 01:07:03.929267  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.929275  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:03.929282  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:03.929335  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:03.963040  142150 cri.go:89] found id: ""
	I1212 01:07:03.963077  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.963089  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:03.963113  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:03.963129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:04.040119  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:04.040147  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:04.040161  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:04.122230  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:04.122269  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:04.163266  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:04.163298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:04.218235  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:04.218271  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:02.342899  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:04.343072  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:03.596239  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.094842  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:05.206813  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:07.209422  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.732405  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:06.748171  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:06.748278  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:06.792828  142150 cri.go:89] found id: ""
	I1212 01:07:06.792853  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.792861  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:06.792868  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:06.792929  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:06.851440  142150 cri.go:89] found id: ""
	I1212 01:07:06.851472  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.851483  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:06.851490  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:06.851556  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:06.894850  142150 cri.go:89] found id: ""
	I1212 01:07:06.894879  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.894887  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:06.894893  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:06.894944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:06.931153  142150 cri.go:89] found id: ""
	I1212 01:07:06.931188  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.931199  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:06.931206  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:06.931271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:06.966835  142150 cri.go:89] found id: ""
	I1212 01:07:06.966862  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.966871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:06.966877  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:06.966939  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:07.004810  142150 cri.go:89] found id: ""
	I1212 01:07:07.004839  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.004848  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:07.004854  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:07.004912  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:07.042641  142150 cri.go:89] found id: ""
	I1212 01:07:07.042679  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.042691  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:07.042699  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:07.042764  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:07.076632  142150 cri.go:89] found id: ""
	I1212 01:07:07.076659  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.076668  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:07.076678  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:07.076692  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:07.136796  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:07.136841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:07.153797  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:07.153831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:07.231995  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:07.232025  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:07.232042  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:07.319913  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:07.319950  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:09.862898  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:09.878554  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:09.878640  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:09.914747  142150 cri.go:89] found id: ""
	I1212 01:07:09.914782  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.914795  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:09.914803  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:09.914864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:09.949960  142150 cri.go:89] found id: ""
	I1212 01:07:09.949998  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.950019  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:09.950027  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:09.950084  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:09.989328  142150 cri.go:89] found id: ""
	I1212 01:07:09.989368  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.989380  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:09.989388  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:09.989454  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:10.024352  142150 cri.go:89] found id: ""
	I1212 01:07:10.024382  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.024390  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:10.024397  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:10.024446  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:10.058429  142150 cri.go:89] found id: ""
	I1212 01:07:10.058459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.058467  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:10.058473  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:10.058524  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:10.095183  142150 cri.go:89] found id: ""
	I1212 01:07:10.095219  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.095227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:10.095232  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:10.095284  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:10.129657  142150 cri.go:89] found id: ""
	I1212 01:07:10.129684  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.129695  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:10.129703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:10.129759  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:10.164433  142150 cri.go:89] found id: ""
	I1212 01:07:10.164459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.164470  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:10.164483  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:10.164500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:10.178655  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:10.178687  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 01:07:08.842564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.843885  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:08.095189  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.096580  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:09.707537  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.205862  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.207175  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	W1212 01:07:10.252370  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:10.252403  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:10.252421  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:10.329870  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:10.329914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:10.377778  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:10.377812  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:12.929471  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:12.944591  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:12.944651  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:12.980053  142150 cri.go:89] found id: ""
	I1212 01:07:12.980079  142150 logs.go:282] 0 containers: []
	W1212 01:07:12.980088  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:12.980097  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:12.980182  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:13.021710  142150 cri.go:89] found id: ""
	I1212 01:07:13.021743  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.021752  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:13.021758  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:13.021828  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:13.060426  142150 cri.go:89] found id: ""
	I1212 01:07:13.060458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.060469  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:13.060477  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:13.060545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:13.097435  142150 cri.go:89] found id: ""
	I1212 01:07:13.097458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.097466  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:13.097471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:13.097521  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:13.134279  142150 cri.go:89] found id: ""
	I1212 01:07:13.134314  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.134327  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:13.134335  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:13.134402  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:13.169942  142150 cri.go:89] found id: ""
	I1212 01:07:13.169971  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.169984  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:13.169992  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:13.170054  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:13.207495  142150 cri.go:89] found id: ""
	I1212 01:07:13.207526  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.207537  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:13.207550  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:13.207636  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:13.245214  142150 cri.go:89] found id: ""
	I1212 01:07:13.245240  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.245248  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:13.245258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:13.245272  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:13.301041  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:13.301081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:13.316068  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:13.316104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:13.391091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:13.391120  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:13.391138  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:13.472090  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:13.472130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:12.844629  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:15.344452  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.594761  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.595360  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:17.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.707535  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.208767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.013216  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.026636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:16.026715  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:16.062126  142150 cri.go:89] found id: ""
	I1212 01:07:16.062157  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.062169  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:16.062177  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:16.062240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:16.097538  142150 cri.go:89] found id: ""
	I1212 01:07:16.097562  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.097572  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:16.097581  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:16.097637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:16.133615  142150 cri.go:89] found id: ""
	I1212 01:07:16.133649  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.133661  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:16.133670  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:16.133732  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:16.169327  142150 cri.go:89] found id: ""
	I1212 01:07:16.169392  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.169414  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:16.169431  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:16.169538  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:16.214246  142150 cri.go:89] found id: ""
	I1212 01:07:16.214270  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.214278  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:16.214284  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:16.214342  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:16.251578  142150 cri.go:89] found id: ""
	I1212 01:07:16.251629  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.251641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:16.251649  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:16.251712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:16.298772  142150 cri.go:89] found id: ""
	I1212 01:07:16.298802  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.298811  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:16.298818  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:16.298891  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:16.336901  142150 cri.go:89] found id: ""
	I1212 01:07:16.336937  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.336946  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:16.336957  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:16.336969  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:16.389335  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:16.389376  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:16.403713  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:16.403743  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:16.485945  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:16.485972  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:16.485992  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:16.572137  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:16.572185  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.120296  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.133826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:19.133902  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:19.174343  142150 cri.go:89] found id: ""
	I1212 01:07:19.174381  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.174391  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:19.174397  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:19.174449  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:19.212403  142150 cri.go:89] found id: ""
	I1212 01:07:19.212425  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.212433  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:19.212439  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:19.212488  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:19.247990  142150 cri.go:89] found id: ""
	I1212 01:07:19.248018  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.248027  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:19.248033  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:19.248088  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:19.286733  142150 cri.go:89] found id: ""
	I1212 01:07:19.286763  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.286775  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:19.286783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:19.286848  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:19.325967  142150 cri.go:89] found id: ""
	I1212 01:07:19.325995  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.326006  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:19.326013  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:19.326073  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:19.361824  142150 cri.go:89] found id: ""
	I1212 01:07:19.361862  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.361874  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:19.361882  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:19.361951  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:19.399874  142150 cri.go:89] found id: ""
	I1212 01:07:19.399903  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.399915  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:19.399924  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:19.399978  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:19.444342  142150 cri.go:89] found id: ""
	I1212 01:07:19.444368  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.444376  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:19.444386  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:19.444398  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:19.524722  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:19.524766  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.564941  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:19.564984  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:19.620881  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:19.620915  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:19.635038  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:19.635078  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:19.707819  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:17.851516  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:20.343210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.596696  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.095982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:21.706245  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:23.707282  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.208686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.222716  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:22.222774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:22.258211  142150 cri.go:89] found id: ""
	I1212 01:07:22.258237  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.258245  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:22.258251  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:22.258299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:22.294663  142150 cri.go:89] found id: ""
	I1212 01:07:22.294692  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.294701  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:22.294707  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:22.294771  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:22.331817  142150 cri.go:89] found id: ""
	I1212 01:07:22.331849  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.331861  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:22.331869  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:22.331927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:22.373138  142150 cri.go:89] found id: ""
	I1212 01:07:22.373168  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.373176  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:22.373185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:22.373238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:22.409864  142150 cri.go:89] found id: ""
	I1212 01:07:22.409903  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.409916  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:22.409927  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:22.409983  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:22.447498  142150 cri.go:89] found id: ""
	I1212 01:07:22.447531  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.447542  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:22.447549  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:22.447626  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:22.488674  142150 cri.go:89] found id: ""
	I1212 01:07:22.488715  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.488727  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:22.488735  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:22.488803  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:22.529769  142150 cri.go:89] found id: ""
	I1212 01:07:22.529797  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.529806  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:22.529817  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:22.529837  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:22.611864  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:22.611889  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:22.611904  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:22.694660  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:22.694707  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:22.736800  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:22.736838  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:22.789670  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:22.789710  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:22.344482  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.844735  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.594999  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:26.595500  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:25.707950  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.200781  141469 pod_ready.go:82] duration metric: took 4m0.000776844s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:28.200837  141469 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:28.200866  141469 pod_ready.go:39] duration metric: took 4m15.556500045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:28.200916  141469 kubeadm.go:597] duration metric: took 4m22.571399912s to restartPrimaryControlPlane
	W1212 01:07:28.201043  141469 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:28.201086  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:25.305223  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.318986  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:25.319057  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:25.356111  142150 cri.go:89] found id: ""
	I1212 01:07:25.356140  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.356150  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:25.356157  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:25.356223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:25.396120  142150 cri.go:89] found id: ""
	I1212 01:07:25.396151  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.396163  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:25.396171  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:25.396236  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:25.436647  142150 cri.go:89] found id: ""
	I1212 01:07:25.436674  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.436681  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:25.436687  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:25.436744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:25.475682  142150 cri.go:89] found id: ""
	I1212 01:07:25.475709  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.475721  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:25.475729  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:25.475791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:25.512536  142150 cri.go:89] found id: ""
	I1212 01:07:25.512564  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.512576  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:25.512584  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:25.512655  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:25.549569  142150 cri.go:89] found id: ""
	I1212 01:07:25.549600  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.549609  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:25.549616  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:25.549681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:25.585042  142150 cri.go:89] found id: ""
	I1212 01:07:25.585074  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.585089  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:25.585106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:25.585181  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:25.626257  142150 cri.go:89] found id: ""
	I1212 01:07:25.626283  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.626291  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:25.626301  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:25.626314  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:25.679732  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:25.679773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:25.693682  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:25.693711  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:25.770576  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:25.770599  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:25.770613  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:25.848631  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:25.848667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.388387  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.404838  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:28.404925  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:28.447452  142150 cri.go:89] found id: ""
	I1212 01:07:28.447486  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.447498  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:28.447506  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:28.447581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:28.487285  142150 cri.go:89] found id: ""
	I1212 01:07:28.487312  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.487321  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:28.487326  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:28.487389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:28.520403  142150 cri.go:89] found id: ""
	I1212 01:07:28.520433  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.520442  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:28.520448  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:28.520514  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:28.556671  142150 cri.go:89] found id: ""
	I1212 01:07:28.556703  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.556712  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:28.556720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:28.556787  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:28.597136  142150 cri.go:89] found id: ""
	I1212 01:07:28.597165  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.597176  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:28.597185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:28.597258  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:28.632603  142150 cri.go:89] found id: ""
	I1212 01:07:28.632633  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.632641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:28.632648  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:28.632710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:28.672475  142150 cri.go:89] found id: ""
	I1212 01:07:28.672512  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.672523  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:28.672530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:28.672581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:28.715053  142150 cri.go:89] found id: ""
	I1212 01:07:28.715093  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.715104  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:28.715114  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:28.715129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.752978  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:28.753017  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:28.807437  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:28.807479  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:28.822196  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:28.822223  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:28.902592  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:28.902616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:28.902630  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:27.343233  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:29.344194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.596410  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.096062  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.486972  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.500676  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:31.500755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:31.536877  142150 cri.go:89] found id: ""
	I1212 01:07:31.536911  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.536922  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:31.536931  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:31.537000  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:31.572637  142150 cri.go:89] found id: ""
	I1212 01:07:31.572670  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.572684  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:31.572692  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:31.572761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:31.610050  142150 cri.go:89] found id: ""
	I1212 01:07:31.610084  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.610097  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:31.610106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:31.610159  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:31.645872  142150 cri.go:89] found id: ""
	I1212 01:07:31.645905  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.645918  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:31.645926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:31.645988  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:31.682374  142150 cri.go:89] found id: ""
	I1212 01:07:31.682401  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.682409  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:31.682415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:31.682464  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:31.724755  142150 cri.go:89] found id: ""
	I1212 01:07:31.724788  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.724801  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:31.724809  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:31.724877  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:31.760700  142150 cri.go:89] found id: ""
	I1212 01:07:31.760732  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.760741  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:31.760747  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:31.760823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:31.794503  142150 cri.go:89] found id: ""
	I1212 01:07:31.794538  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.794549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:31.794562  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:31.794577  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:31.837103  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:31.837139  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:31.889104  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:31.889142  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:31.905849  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:31.905883  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:31.983351  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:31.983372  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:31.983388  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:34.564505  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.577808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:34.577884  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:34.616950  142150 cri.go:89] found id: ""
	I1212 01:07:34.616979  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.616992  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:34.617001  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:34.617071  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:34.653440  142150 cri.go:89] found id: ""
	I1212 01:07:34.653470  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.653478  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:34.653485  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:34.653535  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:34.693426  142150 cri.go:89] found id: ""
	I1212 01:07:34.693457  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.693465  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:34.693471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:34.693520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:34.727113  142150 cri.go:89] found id: ""
	I1212 01:07:34.727154  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.727166  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:34.727175  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:34.727237  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:34.766942  142150 cri.go:89] found id: ""
	I1212 01:07:34.766967  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.766974  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:34.766981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:34.767032  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:34.806189  142150 cri.go:89] found id: ""
	I1212 01:07:34.806214  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.806223  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:34.806229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:34.806293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:34.839377  142150 cri.go:89] found id: ""
	I1212 01:07:34.839408  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.839420  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:34.839429  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:34.839486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:34.877512  142150 cri.go:89] found id: ""
	I1212 01:07:34.877541  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.877549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:34.877558  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:34.877570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:34.914966  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:34.914994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:34.964993  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:34.965033  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:34.979644  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:34.979677  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:35.050842  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:35.050868  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:35.050893  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:31.843547  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.843911  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:36.343719  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.595369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:35.600094  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:37.634362  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.647476  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:37.647542  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:37.681730  142150 cri.go:89] found id: ""
	I1212 01:07:37.681760  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.681768  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:37.681775  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:37.681827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:37.716818  142150 cri.go:89] found id: ""
	I1212 01:07:37.716845  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.716858  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:37.716864  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:37.716913  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:37.753005  142150 cri.go:89] found id: ""
	I1212 01:07:37.753034  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.753042  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:37.753048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:37.753104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:37.789850  142150 cri.go:89] found id: ""
	I1212 01:07:37.789888  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.789900  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:37.789909  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:37.789971  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:37.826418  142150 cri.go:89] found id: ""
	I1212 01:07:37.826455  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.826466  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:37.826475  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:37.826539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:37.862108  142150 cri.go:89] found id: ""
	I1212 01:07:37.862134  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.862143  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:37.862149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:37.862202  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:37.897622  142150 cri.go:89] found id: ""
	I1212 01:07:37.897660  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.897673  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:37.897681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:37.897743  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:37.935027  142150 cri.go:89] found id: ""
	I1212 01:07:37.935055  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.935063  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:37.935072  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:37.935088  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:37.949860  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:37.949890  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:38.019692  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:38.019721  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:38.019740  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:38.100964  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:38.100994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:38.144480  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:38.144514  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:38.844539  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.844997  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:38.096180  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.699192  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.712311  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:40.712398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:40.748454  142150 cri.go:89] found id: ""
	I1212 01:07:40.748482  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.748490  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:40.748496  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:40.748545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:40.785262  142150 cri.go:89] found id: ""
	I1212 01:07:40.785292  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.785305  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:40.785312  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:40.785376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:40.821587  142150 cri.go:89] found id: ""
	I1212 01:07:40.821624  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.821636  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:40.821644  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:40.821713  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:40.882891  142150 cri.go:89] found id: ""
	I1212 01:07:40.882918  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.882926  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:40.882935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:40.882987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:40.923372  142150 cri.go:89] found id: ""
	I1212 01:07:40.923403  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.923412  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:40.923419  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:40.923485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:40.962753  142150 cri.go:89] found id: ""
	I1212 01:07:40.962781  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.962789  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:40.962795  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:40.962851  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:40.996697  142150 cri.go:89] found id: ""
	I1212 01:07:40.996731  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.996744  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:40.996751  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:40.996812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:41.031805  142150 cri.go:89] found id: ""
	I1212 01:07:41.031842  142150 logs.go:282] 0 containers: []
	W1212 01:07:41.031855  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:41.031866  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:41.031884  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:41.108288  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:41.108310  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:41.108333  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:41.190075  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:41.190115  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:41.235886  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:41.235927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:41.288515  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:41.288554  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:43.803694  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.817859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:43.817919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:43.864193  142150 cri.go:89] found id: ""
	I1212 01:07:43.864221  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.864228  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:43.864234  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:43.864288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:43.902324  142150 cri.go:89] found id: ""
	I1212 01:07:43.902359  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.902371  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:43.902379  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:43.902443  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:43.940847  142150 cri.go:89] found id: ""
	I1212 01:07:43.940880  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.940890  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:43.940896  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:43.940947  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:43.979270  142150 cri.go:89] found id: ""
	I1212 01:07:43.979302  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.979314  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:43.979322  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:43.979398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:44.024819  142150 cri.go:89] found id: ""
	I1212 01:07:44.024851  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.024863  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:44.024872  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:44.024941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:44.062199  142150 cri.go:89] found id: ""
	I1212 01:07:44.062225  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.062234  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:44.062242  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:44.062306  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:44.097158  142150 cri.go:89] found id: ""
	I1212 01:07:44.097181  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.097188  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:44.097194  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:44.097240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:44.132067  142150 cri.go:89] found id: ""
	I1212 01:07:44.132105  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.132120  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:44.132132  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:44.132148  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:44.179552  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:44.179589  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:44.238243  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:44.238299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:44.255451  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:44.255493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:44.331758  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:44.331784  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:44.331797  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:43.343026  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.343118  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:42.595856  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.096338  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:46.916033  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.929686  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:46.929761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:46.966328  142150 cri.go:89] found id: ""
	I1212 01:07:46.966357  142150 logs.go:282] 0 containers: []
	W1212 01:07:46.966365  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:46.966371  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:46.966423  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:47.002014  142150 cri.go:89] found id: ""
	I1212 01:07:47.002059  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.002074  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:47.002082  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:47.002148  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:47.038127  142150 cri.go:89] found id: ""
	I1212 01:07:47.038158  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.038166  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:47.038172  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:47.038222  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:47.071654  142150 cri.go:89] found id: ""
	I1212 01:07:47.071684  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.071696  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:47.071704  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:47.071774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:47.105489  142150 cri.go:89] found id: ""
	I1212 01:07:47.105515  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.105524  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:47.105530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:47.105577  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.143005  142150 cri.go:89] found id: ""
	I1212 01:07:47.143042  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.143051  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:47.143058  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:47.143114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:47.176715  142150 cri.go:89] found id: ""
	I1212 01:07:47.176746  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.176756  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:47.176764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:47.176827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:47.211770  142150 cri.go:89] found id: ""
	I1212 01:07:47.211806  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.211817  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:47.211831  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:47.211850  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:47.312766  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:47.312795  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:47.312811  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:47.402444  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:47.402493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:47.441071  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:47.441109  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:47.494465  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:47.494507  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.009996  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.023764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:50.023832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:50.060392  142150 cri.go:89] found id: ""
	I1212 01:07:50.060424  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.060433  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:50.060440  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:50.060497  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:50.094874  142150 cri.go:89] found id: ""
	I1212 01:07:50.094904  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.094914  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:50.094923  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:50.094987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:50.128957  142150 cri.go:89] found id: ""
	I1212 01:07:50.128986  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.128996  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:50.129005  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:50.129067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:50.164794  142150 cri.go:89] found id: ""
	I1212 01:07:50.164819  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.164828  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:50.164835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:50.164890  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:50.201295  142150 cri.go:89] found id: ""
	I1212 01:07:50.201330  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.201342  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:50.201350  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:50.201415  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.343485  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:48.337317  141884 pod_ready.go:82] duration metric: took 4m0.000178627s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:48.337358  141884 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:48.337386  141884 pod_ready.go:39] duration metric: took 4m14.601527023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:48.337421  141884 kubeadm.go:597] duration metric: took 4m22.883520304s to restartPrimaryControlPlane
	W1212 01:07:48.337486  141884 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:48.337526  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:47.595092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:50.096774  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.514069  141469 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312952103s)
	I1212 01:07:54.514153  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:54.543613  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:54.555514  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:54.569001  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:54.569024  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:54.569082  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:54.583472  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:54.583553  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:54.598721  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:54.614369  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:54.614451  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:54.625630  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.643317  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:54.643398  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.652870  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:54.662703  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:54.662774  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:07:54.672601  141469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:54.722949  141469 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:07:54.723064  141469 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:54.845332  141469 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:54.845476  141469 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:54.845623  141469 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:54.855468  141469 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:50.236158  142150 cri.go:89] found id: ""
	I1212 01:07:50.236200  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.236212  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:50.236221  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:50.236271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:50.270232  142150 cri.go:89] found id: ""
	I1212 01:07:50.270268  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.270280  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:50.270288  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:50.270356  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:50.303222  142150 cri.go:89] found id: ""
	I1212 01:07:50.303247  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.303258  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:50.303270  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:50.303288  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.316845  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:50.316874  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:50.384455  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:50.384483  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:50.384500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:50.462863  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:50.462921  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:50.503464  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:50.503495  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:53.063953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.079946  142150 kubeadm.go:597] duration metric: took 4m3.966538012s to restartPrimaryControlPlane
	W1212 01:07:53.080031  142150 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:53.080064  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:54.857558  141469 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:54.857689  141469 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:54.857774  141469 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:54.857890  141469 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:54.857960  141469 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:54.858038  141469 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:54.858109  141469 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:54.858214  141469 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:54.858296  141469 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:54.858396  141469 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:54.858503  141469 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:54.858557  141469 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:54.858643  141469 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:55.129859  141469 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:55.274235  141469 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:07:55.401999  141469 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:56.015091  141469 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:56.123268  141469 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:56.123820  141469 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:56.126469  141469 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:52.595027  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:57.096606  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:58.255454  142150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.175361092s)
	I1212 01:07:58.255545  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:58.270555  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:58.281367  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:58.291555  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:58.291580  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:58.291652  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:58.301408  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:58.301473  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:58.314324  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:58.326559  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:58.326628  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:58.338454  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.348752  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:58.348815  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.361968  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:58.374545  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:58.374614  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:07:58.387280  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:58.474893  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:07:58.475043  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:58.647222  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:58.647400  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:58.647566  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:07:58.839198  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:56.128185  141469 out.go:235]   - Booting up control plane ...
	I1212 01:07:56.128343  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:56.128478  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:56.128577  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:56.149476  141469 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:56.156042  141469 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:56.156129  141469 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:56.292423  141469 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:07:56.292567  141469 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:07:56.794594  141469 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.027526ms
	I1212 01:07:56.794711  141469 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:07:58.841061  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:58.841173  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:58.841297  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:58.841411  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:58.841491  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:58.841575  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:58.841650  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:58.841771  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:58.842200  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:58.842503  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:58.842993  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:58.843207  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:58.843355  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:58.919303  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:59.206038  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:59.318620  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:59.693734  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:59.709562  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:59.710774  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:59.710846  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:59.877625  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:59.879576  142150 out.go:235]   - Booting up control plane ...
	I1212 01:07:59.879733  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:59.892655  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:59.894329  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:59.897694  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:59.898269  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:07:59.594764  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:01.595663  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:02.299386  141469 kubeadm.go:310] [api-check] The API server is healthy after 5.503154599s
	I1212 01:08:02.311549  141469 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:02.326944  141469 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:02.354402  141469 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:02.354661  141469 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-607268 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:02.368168  141469 kubeadm.go:310] [bootstrap-token] Using token: 0eo07f.wy46ulxfywwd0uy8
	I1212 01:08:02.369433  141469 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:02.369569  141469 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:02.381945  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:02.407880  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:02.419211  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:02.426470  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:02.437339  141469 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:02.708518  141469 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:03.143189  141469 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:03.704395  141469 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:03.705460  141469 kubeadm.go:310] 
	I1212 01:08:03.705557  141469 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:03.705576  141469 kubeadm.go:310] 
	I1212 01:08:03.705646  141469 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:03.705650  141469 kubeadm.go:310] 
	I1212 01:08:03.705672  141469 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:03.705724  141469 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:03.705768  141469 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:03.705800  141469 kubeadm.go:310] 
	I1212 01:08:03.705906  141469 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:03.705918  141469 kubeadm.go:310] 
	I1212 01:08:03.705976  141469 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:03.705987  141469 kubeadm.go:310] 
	I1212 01:08:03.706073  141469 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:03.706191  141469 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:03.706286  141469 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:03.706307  141469 kubeadm.go:310] 
	I1212 01:08:03.706438  141469 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:03.706549  141469 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:03.706556  141469 kubeadm.go:310] 
	I1212 01:08:03.706670  141469 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.706833  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:03.706864  141469 kubeadm.go:310] 	--control-plane 
	I1212 01:08:03.706869  141469 kubeadm.go:310] 
	I1212 01:08:03.706951  141469 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:03.706963  141469 kubeadm.go:310] 
	I1212 01:08:03.707035  141469 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.707134  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:03.708092  141469 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:03.708135  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:08:03.708146  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:03.709765  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:03.711315  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:03.724767  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:08:03.749770  141469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:03.749830  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:03.749896  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-607268 minikube.k8s.io/updated_at=2024_12_12T01_08_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=embed-certs-607268 minikube.k8s.io/primary=true
	I1212 01:08:03.973050  141469 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:03.973436  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.094838  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:06.095216  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:04.473952  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.974222  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.473799  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.974261  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.473492  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.974288  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.474064  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.974218  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:08.081567  141469 kubeadm.go:1113] duration metric: took 4.331794716s to wait for elevateKubeSystemPrivileges
	I1212 01:08:08.081603  141469 kubeadm.go:394] duration metric: took 5m2.502707851s to StartCluster
	I1212 01:08:08.081629  141469 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.081722  141469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:08.083443  141469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.083783  141469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:08.083894  141469 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:08.084015  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:08.084027  141469 addons.go:69] Setting metrics-server=true in profile "embed-certs-607268"
	I1212 01:08:08.084045  141469 addons.go:234] Setting addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:08.084014  141469 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-607268"
	I1212 01:08:08.084054  141469 addons.go:69] Setting default-storageclass=true in profile "embed-certs-607268"
	I1212 01:08:08.084083  141469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-607268"
	I1212 01:08:08.084085  141469 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-607268"
	W1212 01:08:08.084130  141469 addons.go:243] addon storage-provisioner should already be in state true
	W1212 01:08:08.084057  141469 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084618  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084658  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084671  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084684  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084617  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084756  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.085205  141469 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:08.086529  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:08.104090  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I1212 01:08:08.104115  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I1212 01:08:08.104092  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1212 01:08:08.104662  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104701  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104785  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105323  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105329  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105337  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105382  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105696  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105718  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105700  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.106132  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106163  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.106364  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.106599  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106626  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.110390  141469 addons.go:234] Setting addon default-storageclass=true in "embed-certs-607268"
	W1212 01:08:08.110415  141469 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:08.110447  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.110811  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.110844  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.124380  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I1212 01:08:08.124888  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.125447  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.125472  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.125764  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.125966  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.126885  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1212 01:08:08.127417  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.127718  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I1212 01:08:08.127911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.127990  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128002  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.128161  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.128338  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.128541  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.128612  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128626  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.129037  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.129640  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.129678  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.129905  141469 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:08.131337  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:08.131367  141469 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:08.131387  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.131816  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.133335  141469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:08.134372  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.134696  141469 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.134714  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:08.134734  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.134851  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.134868  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.135026  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.135247  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.135405  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.135549  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.137253  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137705  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.137725  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137810  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.137911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.138065  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.138162  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.146888  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I1212 01:08:08.147344  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.147919  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.147937  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.148241  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.148418  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.150018  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.150282  141469 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.150299  141469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:08.150318  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.152881  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153311  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.153327  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.153344  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153509  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.153634  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.153816  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.301991  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:08.323794  141469 node_ready.go:35] waiting up to 6m0s for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338205  141469 node_ready.go:49] node "embed-certs-607268" has status "Ready":"True"
	I1212 01:08:08.338241  141469 node_ready.go:38] duration metric: took 14.401624ms for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338255  141469 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:08.355801  141469 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:08.406624  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:08.406648  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:08.409497  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.456893  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:08.456917  141469 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:08.554996  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.558767  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.558793  141469 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:08.614574  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.702483  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702513  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.702818  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.702883  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.702894  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.702904  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702912  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.703142  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.703186  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.703163  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.714426  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.714450  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.714840  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.714857  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.821732  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266688284s)
	I1212 01:08:09.821807  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.821824  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822160  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822185  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.822211  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.822225  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822487  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.822518  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822535  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842157  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.227536232s)
	I1212 01:08:09.842222  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842237  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.842627  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.842663  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.842672  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842679  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842687  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.843002  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.843013  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.843028  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.843046  141469 addons.go:475] Verifying addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:09.844532  141469 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:08.098516  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:10.596197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:09.845721  141469 addons.go:510] duration metric: took 1.761839241s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:10.400164  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:12.862616  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:14.362448  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.362473  141469 pod_ready.go:82] duration metric: took 6.006632075s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.362486  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868198  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.868220  141469 pod_ready.go:82] duration metric: took 505.72656ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868231  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872557  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.872582  141469 pod_ready.go:82] duration metric: took 4.343797ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872599  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876837  141469 pod_ready.go:93] pod "kube-proxy-6hw4b" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.876858  141469 pod_ready.go:82] duration metric: took 4.251529ms for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876867  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881467  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.881487  141469 pod_ready.go:82] duration metric: took 4.612567ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881496  141469 pod_ready.go:39] duration metric: took 6.543228562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:14.881516  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:14.881571  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:14.898899  141469 api_server.go:72] duration metric: took 6.815070313s to wait for apiserver process to appear ...
	I1212 01:08:14.898942  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:14.898963  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:08:14.904555  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:08:14.905738  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:14.905762  141469 api_server.go:131] duration metric: took 6.812513ms to wait for apiserver health ...
	I1212 01:08:14.905771  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:14.964381  141469 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:14.964413  141469 system_pods.go:61] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:14.964418  141469 system_pods.go:61] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:14.964422  141469 system_pods.go:61] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:14.964426  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:14.964429  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:14.964432  141469 system_pods.go:61] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:14.964435  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:14.964441  141469 system_pods.go:61] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:14.964447  141469 system_pods.go:61] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:14.964460  141469 system_pods.go:74] duration metric: took 58.68072ms to wait for pod list to return data ...
	I1212 01:08:14.964476  141469 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:15.161106  141469 default_sa.go:45] found service account: "default"
	I1212 01:08:15.161137  141469 default_sa.go:55] duration metric: took 196.651344ms for default service account to be created ...
	I1212 01:08:15.161147  141469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:15.363429  141469 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:15.363457  141469 system_pods.go:89] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:15.363462  141469 system_pods.go:89] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:15.363466  141469 system_pods.go:89] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:15.363470  141469 system_pods.go:89] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:15.363473  141469 system_pods.go:89] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:15.363477  141469 system_pods.go:89] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:15.363480  141469 system_pods.go:89] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:15.363487  141469 system_pods.go:89] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:15.363492  141469 system_pods.go:89] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:15.363501  141469 system_pods.go:126] duration metric: took 202.347796ms to wait for k8s-apps to be running ...
	I1212 01:08:15.363508  141469 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:15.363553  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:15.378498  141469 system_svc.go:56] duration metric: took 14.977368ms WaitForService to wait for kubelet
	I1212 01:08:15.378527  141469 kubeadm.go:582] duration metric: took 7.294704666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:15.378545  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:15.561384  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:15.561408  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:15.561422  141469 node_conditions.go:105] duration metric: took 182.869791ms to run NodePressure ...
	I1212 01:08:15.561435  141469 start.go:241] waiting for startup goroutines ...
	I1212 01:08:15.561442  141469 start.go:246] waiting for cluster config update ...
	I1212 01:08:15.561453  141469 start.go:255] writing updated cluster config ...
	I1212 01:08:15.561693  141469 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:15.615106  141469 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:15.617073  141469 out.go:177] * Done! kubectl is now configured to use "embed-certs-607268" cluster and "default" namespace by default
	I1212 01:08:14.771660  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.434092304s)
	I1212 01:08:14.771750  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:14.802721  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:08:14.813349  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:08:14.826608  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:08:14.826637  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:08:14.826693  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:08:14.842985  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:08:14.843060  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:08:14.855326  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:08:14.872371  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:08:14.872449  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:08:14.883793  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.894245  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:08:14.894306  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.906163  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:08:14.915821  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:08:14.915867  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:08:14.926019  141884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:08:15.092424  141884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:13.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:15.096259  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:17.596953  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:20.095957  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:22.096970  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:23.562216  141884 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:08:23.562302  141884 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:08:23.562463  141884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:08:23.562655  141884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:08:23.562786  141884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:08:23.562870  141884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:08:23.564412  141884 out.go:235]   - Generating certificates and keys ...
	I1212 01:08:23.564519  141884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:08:23.564605  141884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:08:23.564718  141884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:08:23.564802  141884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:08:23.564879  141884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:08:23.564925  141884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:08:23.565011  141884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:08:23.565110  141884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:08:23.565230  141884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:08:23.565352  141884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:08:23.565393  141884 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:08:23.565439  141884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:08:23.565485  141884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:08:23.565537  141884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:08:23.565582  141884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:08:23.565636  141884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:08:23.565700  141884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:08:23.565786  141884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:08:23.565885  141884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:08:23.567104  141884 out.go:235]   - Booting up control plane ...
	I1212 01:08:23.567195  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:08:23.567267  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:08:23.567353  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:08:23.567472  141884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:08:23.567579  141884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:08:23.567662  141884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:08:23.567812  141884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:08:23.567953  141884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:08:23.568010  141884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001996966s
	I1212 01:08:23.568071  141884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:08:23.568125  141884 kubeadm.go:310] [api-check] The API server is healthy after 5.001946459s
	I1212 01:08:23.568266  141884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:23.568424  141884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:23.568510  141884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:23.568702  141884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-076578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:23.568789  141884 kubeadm.go:310] [bootstrap-token] Using token: 472xql.x3zqihc9l5oj308m
	I1212 01:08:23.570095  141884 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:23.570226  141884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:23.570353  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:23.570550  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:23.570719  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:23.570880  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:23.571006  141884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:23.571186  141884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:23.571245  141884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:23.571322  141884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:23.571333  141884 kubeadm.go:310] 
	I1212 01:08:23.571411  141884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:23.571421  141884 kubeadm.go:310] 
	I1212 01:08:23.571530  141884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:23.571551  141884 kubeadm.go:310] 
	I1212 01:08:23.571609  141884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:23.571711  141884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:23.571795  141884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:23.571808  141884 kubeadm.go:310] 
	I1212 01:08:23.571892  141884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:23.571907  141884 kubeadm.go:310] 
	I1212 01:08:23.571985  141884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:23.571992  141884 kubeadm.go:310] 
	I1212 01:08:23.572069  141884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:23.572184  141884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:23.572276  141884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:23.572286  141884 kubeadm.go:310] 
	I1212 01:08:23.572413  141884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:23.572516  141884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:23.572525  141884 kubeadm.go:310] 
	I1212 01:08:23.572656  141884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.572805  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:23.572847  141884 kubeadm.go:310] 	--control-plane 
	I1212 01:08:23.572856  141884 kubeadm.go:310] 
	I1212 01:08:23.572973  141884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:23.572991  141884 kubeadm.go:310] 
	I1212 01:08:23.573107  141884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.573248  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:23.573273  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:08:23.573283  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:23.574736  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:23.575866  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:23.590133  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:08:23.613644  141884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:23.613737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:23.613759  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-076578 minikube.k8s.io/updated_at=2024_12_12T01_08_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=default-k8s-diff-port-076578 minikube.k8s.io/primary=true
	I1212 01:08:23.642646  141884 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:23.831478  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.331749  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.832158  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.331630  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.831737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:26.331787  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.597126  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:27.095607  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:26.831860  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.331748  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.448891  141884 kubeadm.go:1113] duration metric: took 3.835231667s to wait for elevateKubeSystemPrivileges
	I1212 01:08:27.448930  141884 kubeadm.go:394] duration metric: took 5m2.053707834s to StartCluster
	I1212 01:08:27.448957  141884 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.449060  141884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:27.450918  141884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.451183  141884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:27.451263  141884 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:27.451385  141884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451409  141884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451417  141884 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:08:27.451413  141884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451449  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:27.451454  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451465  141884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-076578"
	I1212 01:08:27.451423  141884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451570  141884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451586  141884 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:27.451648  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451876  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451905  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451927  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.451942  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452055  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.452096  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452939  141884 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:27.454521  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:27.467512  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1212 01:08:27.467541  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I1212 01:08:27.467581  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1212 01:08:27.468032  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468069  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468039  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468580  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468592  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468604  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468609  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468620  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468635  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468968  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.469191  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.469562  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469579  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469613  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.469623  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.472898  141884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.472925  141884 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:27.472956  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.473340  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.473389  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.485014  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I1212 01:08:27.485438  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.486058  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.486077  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.486629  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.486832  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.487060  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1212 01:08:27.487779  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.488503  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.488527  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.488910  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.489132  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.489304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.489892  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1212 01:08:27.490599  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.490758  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.491213  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.491236  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.491385  141884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:27.491606  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.492230  141884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:27.492375  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.492420  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.493368  141884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.493382  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:27.493397  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.493462  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:27.493468  141884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:27.493481  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.496807  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497273  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.497304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497474  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.497647  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.497691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497771  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.497922  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.498178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.498190  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.498288  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.498467  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.498634  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.498779  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.512025  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1212 01:08:27.512490  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.513168  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.513187  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.513474  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.513664  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.514930  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.515106  141884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.515119  141884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:27.515131  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.520051  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520084  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.520183  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520419  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.520574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.520737  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.520828  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.692448  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:27.712214  141884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724269  141884 node_ready.go:49] node "default-k8s-diff-port-076578" has status "Ready":"True"
	I1212 01:08:27.724301  141884 node_ready.go:38] duration metric: took 12.044784ms for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724313  141884 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:27.729135  141884 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:27.768566  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:27.768596  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:27.782958  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.797167  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:27.797190  141884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:27.828960  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:27.828983  141884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:27.871251  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.883614  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:28.198044  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198090  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198457  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198510  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198522  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.198532  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198544  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198817  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198815  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198844  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.277379  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.277405  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.277719  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.277741  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955418  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084128053s)
	I1212 01:08:28.955472  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955561  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071904294s)
	I1212 01:08:28.955624  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955646  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955856  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.955874  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955881  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955888  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.957731  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957740  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957748  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957761  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957802  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957814  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957823  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.957836  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.958072  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.958090  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.958100  141884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-076578"
	I1212 01:08:28.959879  141884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:28.961027  141884 addons.go:510] duration metric: took 1.509771178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:29.241061  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:29.241090  141884 pod_ready.go:82] duration metric: took 1.511925292s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:29.241106  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:31.247610  141884 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:29.095906  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:31.593942  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:33.246910  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.246933  141884 pod_ready.go:82] duration metric: took 4.005818542s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.246944  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753325  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.753350  141884 pod_ready.go:82] duration metric: took 506.39921ms for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753360  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758733  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.758759  141884 pod_ready.go:82] duration metric: took 5.391762ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758769  141884 pod_ready.go:39] duration metric: took 6.034446537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:33.758789  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:33.758854  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:33.774952  141884 api_server.go:72] duration metric: took 6.323732468s to wait for apiserver process to appear ...
	I1212 01:08:33.774976  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:33.774995  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:08:33.780463  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:08:33.781364  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:33.781387  141884 api_server.go:131] duration metric: took 6.404187ms to wait for apiserver health ...
	I1212 01:08:33.781396  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:33.786570  141884 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:33.786591  141884 system_pods.go:61] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.786596  141884 system_pods.go:61] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.786599  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.786603  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.786606  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.786610  141884 system_pods.go:61] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.786615  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.786623  141884 system_pods.go:61] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.786630  141884 system_pods.go:61] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.786643  141884 system_pods.go:74] duration metric: took 5.239236ms to wait for pod list to return data ...
	I1212 01:08:33.786655  141884 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:33.789776  141884 default_sa.go:45] found service account: "default"
	I1212 01:08:33.789794  141884 default_sa.go:55] duration metric: took 3.13371ms for default service account to be created ...
	I1212 01:08:33.789801  141884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:33.794118  141884 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:33.794139  141884 system_pods.go:89] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.794145  141884 system_pods.go:89] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.794149  141884 system_pods.go:89] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.794154  141884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.794157  141884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.794161  141884 system_pods.go:89] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.794165  141884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.794170  141884 system_pods.go:89] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.794177  141884 system_pods.go:89] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.794185  141884 system_pods.go:126] duration metric: took 4.378791ms to wait for k8s-apps to be running ...
	I1212 01:08:33.794194  141884 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:33.794233  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:33.809257  141884 system_svc.go:56] duration metric: took 15.051528ms WaitForService to wait for kubelet
	I1212 01:08:33.809290  141884 kubeadm.go:582] duration metric: took 6.358073584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:33.809323  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:33.813154  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:33.813174  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:33.813183  141884 node_conditions.go:105] duration metric: took 3.85493ms to run NodePressure ...
	I1212 01:08:33.813194  141884 start.go:241] waiting for startup goroutines ...
	I1212 01:08:33.813200  141884 start.go:246] waiting for cluster config update ...
	I1212 01:08:33.813210  141884 start.go:255] writing updated cluster config ...
	I1212 01:08:33.813474  141884 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:33.862511  141884 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:33.864367  141884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-076578" cluster and "default" namespace by default
	I1212 01:08:33.594621  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:34.589133  141411 pod_ready.go:82] duration metric: took 4m0.000384717s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	E1212 01:08:34.589166  141411 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:08:34.589184  141411 pod_ready.go:39] duration metric: took 4m8.190648334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:34.589214  141411 kubeadm.go:597] duration metric: took 4m15.984656847s to restartPrimaryControlPlane
	W1212 01:08:34.589299  141411 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:08:34.589327  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:08:39.900234  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:08:39.900966  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:39.901216  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:44.901739  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:44.901921  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:54.902652  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:54.902877  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:00.919650  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.330292422s)
	I1212 01:09:00.919762  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:00.956649  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:09:00.976311  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:00.999339  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:00.999364  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:00.999413  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:01.013048  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:01.013112  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:01.027407  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:01.036801  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:01.036854  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:01.046865  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.056325  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:01.056390  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.066574  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:01.078080  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:01.078130  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
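	The grep/rm sequence above is the runner's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before re-running kubeadm init. A condensed shell sketch of that per-file check (illustrative only; the real logic lives in kubeadm.go, and in this run every grep exits with status 2 because the files do not exist):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"   # drop configs that do not point at the expected endpoint
	done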
	I1212 01:09:01.088810  141411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:01.249481  141411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:09.318633  141411 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:09:09.318694  141411 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:09:09.318789  141411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:09:09.318924  141411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:09:09.319074  141411 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:09:09.319185  141411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:09:09.320615  141411 out.go:235]   - Generating certificates and keys ...
	I1212 01:09:09.320710  141411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:09:09.320803  141411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:09:09.320886  141411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:09:09.320957  141411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:09:09.321061  141411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:09:09.321118  141411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:09:09.321188  141411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:09:09.321249  141411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:09:09.321334  141411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:09:09.321442  141411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:09:09.321516  141411 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:09:09.321611  141411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:09:09.321698  141411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:09:09.321775  141411 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:09:09.321849  141411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:09:09.321924  141411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:09:09.321973  141411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:09:09.322099  141411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:09:09.322204  141411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:09:09.323661  141411 out.go:235]   - Booting up control plane ...
	I1212 01:09:09.323780  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:09:09.323864  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:09:09.323950  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:09:09.324082  141411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:09:09.324181  141411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:09:09.324255  141411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:09:09.324431  141411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:09:09.324571  141411 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:09:09.324647  141411 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.39943ms
	I1212 01:09:09.324730  141411 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:09:09.324780  141411 kubeadm.go:310] [api-check] The API server is healthy after 5.001520724s
	I1212 01:09:09.324876  141411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:09:09.325036  141411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:09:09.325136  141411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:09:09.325337  141411 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-242725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:09:09.325401  141411 kubeadm.go:310] [bootstrap-token] Using token: k8uf20.0v0t2d7mhtmwxurz
	I1212 01:09:09.326715  141411 out.go:235]   - Configuring RBAC rules ...
	I1212 01:09:09.326840  141411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:09:09.326938  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:09:09.327149  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:09:09.327329  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:09:09.327498  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:09:09.327643  141411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:09:09.327787  141411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:09:09.327852  141411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:09:09.327926  141411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:09:09.327935  141411 kubeadm.go:310] 
	I1212 01:09:09.328027  141411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:09:09.328036  141411 kubeadm.go:310] 
	I1212 01:09:09.328138  141411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:09:09.328148  141411 kubeadm.go:310] 
	I1212 01:09:09.328183  141411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:09:09.328253  141411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:09:09.328302  141411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:09:09.328308  141411 kubeadm.go:310] 
	I1212 01:09:09.328396  141411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:09:09.328413  141411 kubeadm.go:310] 
	I1212 01:09:09.328478  141411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:09:09.328489  141411 kubeadm.go:310] 
	I1212 01:09:09.328554  141411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:09:09.328643  141411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:09:09.328719  141411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:09:09.328727  141411 kubeadm.go:310] 
	I1212 01:09:09.328797  141411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:09:09.328885  141411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:09:09.328894  141411 kubeadm.go:310] 
	I1212 01:09:09.328997  141411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329096  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:09:09.329120  141411 kubeadm.go:310] 	--control-plane 
	I1212 01:09:09.329126  141411 kubeadm.go:310] 
	I1212 01:09:09.329201  141411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:09:09.329209  141411 kubeadm.go:310] 
	I1212 01:09:09.329276  141411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329374  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:09:09.329386  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:09:09.329393  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:09:09.330870  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:09:09.332191  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:09:09.345593  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
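	At this point the bridge CNI conflist has been copied onto the node; the 496-byte payload itself is not shown in the log. A minimal sketch, assuming the no-preload-242725 profile from this run, of how the written config could be inspected by hand (not part of the captured run):

	minikube ssh -p no-preload-242725 "sudo ls -la /etc/cni/net.d"
	minikube ssh -p no-preload-242725 "sudo cat /etc/cni/net.d/1-k8s.conflist"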
	I1212 01:09:09.366177  141411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:09:09.366234  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:09.366252  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-242725 minikube.k8s.io/updated_at=2024_12_12T01_09_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=no-preload-242725 minikube.k8s.io/primary=true
	I1212 01:09:09.589709  141411 ops.go:34] apiserver oom_adj: -16
	I1212 01:09:09.589889  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.090703  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.590697  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.090698  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.590027  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.090413  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.590626  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.090322  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.590174  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.090032  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.233581  141411 kubeadm.go:1113] duration metric: took 4.867404479s to wait for elevateKubeSystemPrivileges
	I1212 01:09:14.233636  141411 kubeadm.go:394] duration metric: took 4m55.678870659s to StartCluster
	I1212 01:09:14.233674  141411 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.233790  141411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:09:14.236087  141411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.236385  141411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:09:14.236460  141411 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:09:14.236567  141411 addons.go:69] Setting storage-provisioner=true in profile "no-preload-242725"
	I1212 01:09:14.236583  141411 addons.go:69] Setting default-storageclass=true in profile "no-preload-242725"
	I1212 01:09:14.236610  141411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-242725"
	I1212 01:09:14.236611  141411 addons.go:69] Setting metrics-server=true in profile "no-preload-242725"
	I1212 01:09:14.236631  141411 addons.go:234] Setting addon metrics-server=true in "no-preload-242725"
	W1212 01:09:14.236646  141411 addons.go:243] addon metrics-server should already be in state true
	I1212 01:09:14.236682  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.236588  141411 addons.go:234] Setting addon storage-provisioner=true in "no-preload-242725"
	I1212 01:09:14.236687  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1212 01:09:14.236712  141411 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:09:14.236838  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.237093  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237141  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237185  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237101  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237227  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237235  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237863  141411 out.go:177] * Verifying Kubernetes components...
	I1212 01:09:14.239284  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:09:14.254182  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I1212 01:09:14.254405  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I1212 01:09:14.254418  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1212 01:09:14.254742  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254857  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254874  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255388  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255415  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255439  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255803  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255814  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255807  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.256218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.256360  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256396  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.256524  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256567  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.259313  141411 addons.go:234] Setting addon default-storageclass=true in "no-preload-242725"
	W1212 01:09:14.259330  141411 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:09:14.259357  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.259575  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.259621  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.273148  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I1212 01:09:14.273601  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.273909  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
	I1212 01:09:14.274174  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274200  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274282  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.274560  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.274785  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274801  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274866  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.275126  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.275280  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.276840  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.277013  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.278945  141411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:09:14.279016  141411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:09:14.903981  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:14.904298  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:14.280219  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:09:14.280239  141411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:09:14.280268  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.280440  141411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.280450  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:09:14.280464  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.281368  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I1212 01:09:14.282054  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.282652  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.282673  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.283314  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.283947  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.283990  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.284230  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284232  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284802  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.284830  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285052  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285088  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.285106  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285247  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285458  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285483  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285619  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285624  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.285761  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285880  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.323872  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I1212 01:09:14.324336  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.324884  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.324906  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.325248  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.325437  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.326991  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.327217  141411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.327237  141411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:09:14.327258  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.330291  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.330895  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.330910  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.330926  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.331062  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.331219  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.331343  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.411182  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:09:14.454298  141411 node_ready.go:35] waiting up to 6m0s for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467328  141411 node_ready.go:49] node "no-preload-242725" has status "Ready":"True"
	I1212 01:09:14.467349  141411 node_ready.go:38] duration metric: took 13.017274ms for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467359  141411 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:14.482865  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:14.557685  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.594366  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.602730  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:09:14.602760  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:09:14.666446  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:09:14.666474  141411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:09:14.746040  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.746075  141411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:09:14.799479  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.862653  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.862688  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863687  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.863706  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.863721  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.863730  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863740  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:14.863988  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.864007  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878604  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.878630  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.878903  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.878944  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878914  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.914665  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320255607s)
	I1212 01:09:15.914726  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.914741  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915158  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.915204  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915219  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:15.915236  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.915249  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915499  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915528  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.106582  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.307047373s)
	I1212 01:09:16.106635  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.106652  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107000  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107020  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107030  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.107037  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107298  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107317  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107328  141411 addons.go:475] Verifying addon metrics-server=true in "no-preload-242725"
	I1212 01:09:16.107305  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:16.108981  141411 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:09:16.110608  141411 addons.go:510] duration metric: took 1.874161814s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
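	With default-storageclass, storage-provisioner and metrics-server applied, the remaining question for the metrics-server addon is whether its pod and APIService actually become available. A hedged sketch of a manual check, assuming the kubectl context minikube creates for this profile (as the "Done!" line later in the log indicates):

	kubectl --context no-preload-242725 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context no-preload-242725 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-242725 top nodes    # only succeeds once metrics are being scraped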
	I1212 01:09:16.498983  141411 pod_ready.go:103] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:09:16.989762  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:16.989784  141411 pod_ready.go:82] duration metric: took 2.506893862s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:16.989795  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996560  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:17.996582  141411 pod_ready.go:82] duration metric: took 1.00678165s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996593  141411 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002275  141411 pod_ready.go:93] pod "etcd-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.002294  141411 pod_ready.go:82] duration metric: took 5.694407ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002308  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006942  141411 pod_ready.go:93] pod "kube-apiserver-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.006965  141411 pod_ready.go:82] duration metric: took 4.650802ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006978  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011581  141411 pod_ready.go:93] pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.011621  141411 pod_ready.go:82] duration metric: took 4.634646ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011634  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187112  141411 pod_ready.go:93] pod "kube-proxy-5kc2s" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.187143  141411 pod_ready.go:82] duration metric: took 175.498685ms for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187156  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.586974  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.587003  141411 pod_ready.go:82] duration metric: took 399.836187ms for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.587012  141411 pod_ready.go:39] duration metric: took 4.119642837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
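	The pod_ready waits above poll each system-critical component by its label selector with a 6m0s budget. A rough kubectl equivalent for one of those selectors (illustrative only; the test drives the Go client directly, not kubectl, and the context name is assumed from the profile):

	kubectl --context no-preload-242725 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s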
	I1212 01:09:18.587032  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:09:18.587091  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:18.603406  141411 api_server.go:72] duration metric: took 4.366985373s to wait for apiserver process to appear ...
	I1212 01:09:18.603446  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:09:18.603473  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:09:18.609003  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:09:18.609950  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:09:18.609968  141411 api_server.go:131] duration metric: took 6.513408ms to wait for apiserver health ...
	I1212 01:09:18.609976  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:09:18.790460  141411 system_pods.go:59] 9 kube-system pods found
	I1212 01:09:18.790494  141411 system_pods.go:61] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:18.790502  141411 system_pods.go:61] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:18.790507  141411 system_pods.go:61] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:18.790510  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:18.790515  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:18.790520  141411 system_pods.go:61] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:18.790525  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:18.790534  141411 system_pods.go:61] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:18.790540  141411 system_pods.go:61] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:18.790556  141411 system_pods.go:74] duration metric: took 180.570066ms to wait for pod list to return data ...
	I1212 01:09:18.790566  141411 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:09:18.987130  141411 default_sa.go:45] found service account: "default"
	I1212 01:09:18.987172  141411 default_sa.go:55] duration metric: took 196.594497ms for default service account to be created ...
	I1212 01:09:18.987185  141411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:09:19.189233  141411 system_pods.go:86] 9 kube-system pods found
	I1212 01:09:19.189262  141411 system_pods.go:89] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:19.189267  141411 system_pods.go:89] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:19.189271  141411 system_pods.go:89] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:19.189274  141411 system_pods.go:89] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:19.189290  141411 system_pods.go:89] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:19.189294  141411 system_pods.go:89] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:19.189300  141411 system_pods.go:89] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:19.189308  141411 system_pods.go:89] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:19.189318  141411 system_pods.go:89] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:19.189331  141411 system_pods.go:126] duration metric: took 202.137957ms to wait for k8s-apps to be running ...
	I1212 01:09:19.189341  141411 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:09:19.189391  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:19.204241  141411 system_svc.go:56] duration metric: took 14.889522ms WaitForService to wait for kubelet
	I1212 01:09:19.204272  141411 kubeadm.go:582] duration metric: took 4.967858935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:09:19.204289  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:09:19.387735  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:09:19.387760  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:09:19.387768  141411 node_conditions.go:105] duration metric: took 183.47486ms to run NodePressure ...
	I1212 01:09:19.387780  141411 start.go:241] waiting for startup goroutines ...
	I1212 01:09:19.387787  141411 start.go:246] waiting for cluster config update ...
	I1212 01:09:19.387796  141411 start.go:255] writing updated cluster config ...
	I1212 01:09:19.388041  141411 ssh_runner.go:195] Run: rm -f paused
	I1212 01:09:19.437923  141411 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:09:19.439913  141411 out.go:177] * Done! kubectl is now configured to use "no-preload-242725" cluster and "default" namespace by default
	I1212 01:09:54.906484  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:54.906805  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:54.906828  142150 kubeadm.go:310] 
	I1212 01:09:54.906866  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:09:54.906908  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:09:54.906915  142150 kubeadm.go:310] 
	I1212 01:09:54.906944  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:09:54.906974  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:09:54.907087  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:09:54.907106  142150 kubeadm.go:310] 
	I1212 01:09:54.907205  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:09:54.907240  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:09:54.907271  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:09:54.907277  142150 kubeadm.go:310] 
	I1212 01:09:54.907369  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:09:54.907474  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:09:54.907499  142150 kubeadm.go:310] 
	I1212 01:09:54.907659  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:09:54.907749  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:09:54.907815  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:09:54.907920  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:09:54.907937  142150 kubeadm.go:310] 
	I1212 01:09:54.909051  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:54.909171  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:09:54.909277  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:09:54.909442  142150 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
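	Before the retry below, the triage this output suggests comes down to confirming whether the kubelet ever came up and whether any control-plane container started under cri-o. A minimal sketch of those checks run on the node (illustrative; the cgroup-driver comparison is an assumption about one common cause of this symptom, not something this log confirms):

	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 50
	curl -sSL http://localhost:10248/healthz          # the endpoint the kubelet-check above keeps probing
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# assumption: a kubelet/cri-o cgroup-driver mismatch is a frequent cause of this failure mode
	grep -i cgroupDriver /var/lib/kubelet/config.yaml
	grep -R cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null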
	
	I1212 01:09:54.909493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:09:55.377787  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:55.393139  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:55.403640  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:55.403664  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:55.403707  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:55.413315  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:55.413394  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:55.422954  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:55.432010  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:55.432073  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:55.441944  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.451991  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:55.452064  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.461584  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:55.471118  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:55.471191  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:09:55.480829  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:55.713359  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:11:51.592618  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:11:51.592716  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:11:51.594538  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:11:51.594601  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:11:51.594684  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:11:51.594835  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:11:51.594954  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:11:51.595052  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:11:51.597008  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:11:51.597118  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:11:51.597173  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:11:51.597241  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:11:51.597297  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:11:51.597359  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:11:51.597427  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:11:51.597508  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:11:51.597585  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:11:51.597681  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:11:51.597766  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:11:51.597804  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:11:51.597869  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:11:51.597941  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:11:51.598021  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:11:51.598119  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:11:51.598207  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:11:51.598320  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:11:51.598427  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:11:51.598485  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:11:51.598577  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:11:51.599918  142150 out.go:235]   - Booting up control plane ...
	I1212 01:11:51.600024  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:11:51.600148  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:11:51.600229  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:11:51.600341  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:11:51.600507  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:11:51.600572  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:11:51.600672  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.600878  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.600992  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601222  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601285  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601456  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601515  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601702  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601804  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.602020  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.602033  142150 kubeadm.go:310] 
	I1212 01:11:51.602093  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:11:51.602153  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:11:51.602163  142150 kubeadm.go:310] 
	I1212 01:11:51.602211  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:11:51.602274  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:11:51.602393  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:11:51.602416  142150 kubeadm.go:310] 
	I1212 01:11:51.602561  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:11:51.602618  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:11:51.602651  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:11:51.602661  142150 kubeadm.go:310] 
	I1212 01:11:51.602794  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:11:51.602919  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:11:51.602928  142150 kubeadm.go:310] 
	I1212 01:11:51.603023  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:11:51.603110  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:11:51.603176  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:11:51.603237  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:11:51.603252  142150 kubeadm.go:310] 
	I1212 01:11:51.603327  142150 kubeadm.go:394] duration metric: took 8m2.544704165s to StartCluster
	I1212 01:11:51.603376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:51.603447  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:51.648444  142150 cri.go:89] found id: ""
	I1212 01:11:51.648488  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.648501  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:11:51.648509  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:11:51.648573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:51.687312  142150 cri.go:89] found id: ""
	I1212 01:11:51.687341  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.687354  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:11:51.687362  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:11:51.687419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:51.726451  142150 cri.go:89] found id: ""
	I1212 01:11:51.726505  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.726521  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:11:51.726529  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:51.726594  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:51.763077  142150 cri.go:89] found id: ""
	I1212 01:11:51.763112  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.763125  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:11:51.763132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:51.763194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:51.801102  142150 cri.go:89] found id: ""
	I1212 01:11:51.801139  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.801152  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:51.801160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:51.801220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:51.838249  142150 cri.go:89] found id: ""
	I1212 01:11:51.838275  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.838283  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:11:51.838290  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:51.838357  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:51.874958  142150 cri.go:89] found id: ""
	I1212 01:11:51.874989  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.874997  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:51.875007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:11:51.875106  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:11:51.911408  142150 cri.go:89] found id: ""
	I1212 01:11:51.911440  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.911451  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:11:51.911465  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:51.911483  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:51.997485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:51.997516  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:11:51.997532  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:11:52.119827  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:11:52.119869  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:52.162270  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:52.162298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:52.215766  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:52.215805  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 01:11:52.231106  142150 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:11:52.231187  142150 out.go:270] * 
	W1212 01:11:52.231316  142150 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.231351  142150 out.go:270] * 
	W1212 01:11:52.232281  142150 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:11:52.235692  142150 out.go:201] 
	W1212 01:11:52.236852  142150 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.236890  142150 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:11:52.236910  142150 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:11:52.238333  142150 out.go:201] 
	
	
	==> CRI-O <==
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.435977470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966457435954664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2eff3a2c-6d6c-4eba-ad84-d8d5987852aa name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.436737959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbf21252-3d80-4061-b2aa-106e669d7bfb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.436785733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbf21252-3d80-4061-b2aa-106e669d7bfb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.436820010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fbf21252-3d80-4061-b2aa-106e669d7bfb name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.469134856Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29b77914-7412-4eb2-8bcc-f73be0da51ad name=/runtime.v1.RuntimeService/Version
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.469247294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29b77914-7412-4eb2-8bcc-f73be0da51ad name=/runtime.v1.RuntimeService/Version
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.470495232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e409404-c948-44ce-a59b-58082a7ca7ee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.470896132Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966457470876430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e409404-c948-44ce-a59b-58082a7ca7ee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.471370798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6fd16e4-f281-40b7-8775-b7b675b7e442 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.471422365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6fd16e4-f281-40b7-8775-b7b675b7e442 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.471520678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f6fd16e4-f281-40b7-8775-b7b675b7e442 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.503391971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5dd4a036-c085-4ccc-99d2-37a337a50d2e name=/runtime.v1.RuntimeService/Version
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.503520063Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5dd4a036-c085-4ccc-99d2-37a337a50d2e name=/runtime.v1.RuntimeService/Version
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.504379532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40974df6-ed54-4c72-901c-680e28c1abe1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.504829917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966457504800086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40974df6-ed54-4c72-901c-680e28c1abe1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.505979960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=350a4739-378a-4510-b171-81f55a661786 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.506032242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=350a4739-378a-4510-b171-81f55a661786 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.506065980Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=350a4739-378a-4510-b171-81f55a661786 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.538584790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=db4528cc-7d05-44bf-87a6-62e1eb76e889 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.538674024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=db4528cc-7d05-44bf-87a6-62e1eb76e889 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.540361945Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=16061a4b-956d-4ace-90e5-9ef1352611a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.540799275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966457540776276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=16061a4b-956d-4ace-90e5-9ef1352611a1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.541333067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd1c920e-567c-42fe-a129-bd932bc7e45f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.541411731Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd1c920e-567c-42fe-a129-bd932bc7e45f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:20:57 old-k8s-version-738445 crio[636]: time="2024-12-12 01:20:57.541495778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dd1c920e-567c-42fe-a129-bd932bc7e45f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055186] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042033] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.154525] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.857593] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.677106] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.928690] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.061807] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069660] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.204368] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.145806] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.275893] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +7.875714] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.056265] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.046586] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[Dec12 01:04] kauditd_printk_skb: 46 callbacks suppressed
	[Dec12 01:07] systemd-fstab-generator[5072]: Ignoring "noauto" option for root device
	[Dec12 01:09] systemd-fstab-generator[5350]: Ignoring "noauto" option for root device
	[  +0.066882] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:20:57 up 17 min,  0 users,  load average: 0.05, 0.06, 0.06
	Linux old-k8s-version-738445 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]:         /usr/local/go/src/net/dial.go:580 +0x5e5
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]: net.(*sysDialer).dialSerial(0xc000969000, 0x4f7fe40, 0xc000b6d260, 0xc00096feb0, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]:         /usr/local/go/src/net/dial.go:548 +0x152
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]: net.(*Dialer).DialContext(0xc000193da0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000caa210, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00092b580, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000caa210, 0x24, 0x60, 0x7f832099fc08, 0x118, ...)
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]: net/http.(*Transport).dial(0xc0008723c0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000caa210, 0x24, 0x0, 0x0, 0x0, ...)
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]: net/http.(*Transport).dialConn(0xc0008723c0, 0x4f7fe00, 0xc000120018, 0x0, 0xc000a612c0, 0x5, 0xc000caa210, 0x24, 0x0, 0xc000994d80, ...)
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]: net/http.(*Transport).dialConnFor(0xc0008723c0, 0xc0009964d0)
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]: created by net/http.(*Transport).queueForDial
	Dec 12 01:20:52 old-k8s-version-738445 kubelet[6524]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Dec 12 01:20:52 old-k8s-version-738445 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 12 01:20:52 old-k8s-version-738445 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:20:52 old-k8s-version-738445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Dec 12 01:20:52 old-k8s-version-738445 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 12 01:20:52 old-k8s-version-738445 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 12 01:20:53 old-k8s-version-738445 kubelet[6534]: I1212 01:20:53.078810    6534 server.go:416] Version: v1.20.0
	Dec 12 01:20:53 old-k8s-version-738445 kubelet[6534]: I1212 01:20:53.079054    6534 server.go:837] Client rotation is on, will bootstrap in background
	Dec 12 01:20:53 old-k8s-version-738445 kubelet[6534]: I1212 01:20:53.081742    6534 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 12 01:20:53 old-k8s-version-738445 kubelet[6534]: I1212 01:20:53.082721    6534 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 12 01:20:53 old-k8s-version-738445 kubelet[6534]: W1212 01:20:53.082733    6534 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 2 (249.955648ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-738445" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (415.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-607268 -n embed-certs-607268
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-12 01:24:13.217439301 +0000 UTC m=+6651.309106903
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-607268 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-607268 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.666µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-607268 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-607268 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-607268 logs -n 25: (1.308052233s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-242725             | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	| addons  | enable metrics-server -p embed-certs-607268            | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-535684 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | disable-driver-mounts-535684                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:56 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-076578  | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC | 12 Dec 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC | 12 Dec 24 01:23 UTC |
	| start   | -p newest-cni-819544 --memory=2200 --alsologtostderr   | newest-cni-819544            | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC | 12 Dec 24 01:24 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC | 12 Dec 24 01:23 UTC |
	| start   | -p auto-018985 --memory=3072                           | auto-018985                  | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819544             | newest-cni-819544            | jenkins | v1.34.0 | 12 Dec 24 01:24 UTC | 12 Dec 24 01:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819544                                   | newest-cni-819544            | jenkins | v1.34.0 | 12 Dec 24 01:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 01:23:49
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:23:49.034119  149302 out.go:345] Setting OutFile to fd 1 ...
	I1212 01:23:49.034251  149302 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 01:23:49.034263  149302 out.go:358] Setting ErrFile to fd 2...
	I1212 01:23:49.034269  149302 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 01:23:49.034485  149302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 01:23:49.035092  149302 out.go:352] Setting JSON to false
	I1212 01:23:49.036087  149302 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":14771,"bootTime":1733951858,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 01:23:49.036202  149302 start.go:139] virtualization: kvm guest
	I1212 01:23:49.038485  149302 out.go:177] * [auto-018985] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 01:23:49.040126  149302 notify.go:220] Checking for updates...
	I1212 01:23:49.040151  149302 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 01:23:49.041763  149302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:23:49.043439  149302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:23:49.044848  149302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:23:49.046329  149302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 01:23:49.048040  149302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:23:49.050029  149302 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:23:49.050168  149302 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:23:49.050327  149302 config.go:182] Loaded profile config "newest-cni-819544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:23:49.050440  149302 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 01:23:49.088816  149302 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 01:23:49.090352  149302 start.go:297] selected driver: kvm2
	I1212 01:23:49.090372  149302 start.go:901] validating driver "kvm2" against <nil>
	I1212 01:23:49.090393  149302 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:23:49.091098  149302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:23:49.091184  149302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 01:23:49.107584  149302 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 01:23:49.107661  149302 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1212 01:23:49.108022  149302 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:23:49.108065  149302 cni.go:84] Creating CNI manager for ""
	I1212 01:23:49.108119  149302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:23:49.108134  149302 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 01:23:49.108203  149302 start.go:340] cluster config:
	{Name:auto-018985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:23:49.108336  149302 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:23:49.110235  149302 out.go:177] * Starting "auto-018985" primary control-plane node in "auto-018985" cluster
	I1212 01:23:49.111587  149302 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:23:49.111652  149302 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1212 01:23:49.111662  149302 cache.go:56] Caching tarball of preloaded images
	I1212 01:23:49.111772  149302 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 01:23:49.111786  149302 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 01:23:49.111917  149302 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/config.json ...
	I1212 01:23:49.111943  149302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/config.json: {Name:mkf47cdecfbd659296002d482dcac3147a4e6b96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:49.112127  149302 start.go:360] acquireMachinesLock for auto-018985: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:23:49.112171  149302 start.go:364] duration metric: took 26.018µs to acquireMachinesLock for "auto-018985"
	I1212 01:23:49.112193  149302 start.go:93] Provisioning new machine with config: &{Name:auto-018985 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:23:49.112284  149302 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 01:23:46.407280  148785 out.go:235]   - Booting up control plane ...
	I1212 01:23:46.407391  148785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:23:46.407754  148785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:23:46.409476  148785 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:23:46.427944  148785 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:23:46.434950  148785 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:23:46.435022  148785 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:23:46.598182  148785 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:23:46.598345  148785 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:23:47.099000  148785 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.278251ms
	I1212 01:23:47.099086  148785 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:23:52.597352  148785 kubeadm.go:310] [api-check] The API server is healthy after 5.501665917s
	I1212 01:23:52.621181  148785 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:23:52.653828  148785 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:23:52.700954  148785 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:23:52.701276  148785 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-819544 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:23:52.726349  148785 kubeadm.go:310] [bootstrap-token] Using token: vjmv10.w9vmzb1zisszaa3k
	I1212 01:23:52.727990  148785 out.go:235]   - Configuring RBAC rules ...
	I1212 01:23:52.728167  148785 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:23:52.736607  148785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:23:52.747870  148785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:23:52.752911  148785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:23:52.762121  148785 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:23:52.768763  148785 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:23:53.005870  148785 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:23:53.489298  148785 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:23:54.004157  148785 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:23:54.005176  148785 kubeadm.go:310] 
	I1212 01:23:54.005299  148785 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:23:54.005353  148785 kubeadm.go:310] 
	I1212 01:23:54.005507  148785 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:23:54.005522  148785 kubeadm.go:310] 
	I1212 01:23:54.005554  148785 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:23:54.005634  148785 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:23:54.005731  148785 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:23:54.005743  148785 kubeadm.go:310] 
	I1212 01:23:54.005820  148785 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:23:54.005846  148785 kubeadm.go:310] 
	I1212 01:23:54.005916  148785 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:23:54.005922  148785 kubeadm.go:310] 
	I1212 01:23:54.006013  148785 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:23:54.006145  148785 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:23:54.006247  148785 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:23:54.006258  148785 kubeadm.go:310] 
	I1212 01:23:54.006378  148785 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:23:54.006493  148785 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:23:54.006503  148785 kubeadm.go:310] 
	I1212 01:23:54.006634  148785 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vjmv10.w9vmzb1zisszaa3k \
	I1212 01:23:54.006815  148785 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:23:54.006858  148785 kubeadm.go:310] 	--control-plane 
	I1212 01:23:54.006868  148785 kubeadm.go:310] 
	I1212 01:23:54.007014  148785 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:23:54.007037  148785 kubeadm.go:310] 
	I1212 01:23:54.007150  148785 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vjmv10.w9vmzb1zisszaa3k \
	I1212 01:23:54.007299  148785 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:23:54.007775  148785 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
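
The --discovery-token-ca-cert-hash value kubeadm prints above is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. Below is a minimal Go sketch that recomputes it on the control-plane node, assuming the standard kubeadm CA location /etc/kubernetes/pki/ca.crt; it illustrates the hashing scheme only and is not minikube or kubeadm source.

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Read the cluster CA certificate (default kubeadm path; adjust if needed).
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm's discovery hash is SHA-256 over the certificate's
	// Subject Public Key Info, printed as "sha256:<hex>".
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```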
	I1212 01:23:54.007905  148785 cni.go:84] Creating CNI manager for ""
	I1212 01:23:54.007924  148785 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:23:54.009588  148785 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:23:49.114738  149302 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1212 01:23:49.114911  149302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:23:49.114965  149302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:23:49.129749  149302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45757
	I1212 01:23:49.130256  149302 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:23:49.130852  149302 main.go:141] libmachine: Using API Version  1
	I1212 01:23:49.130871  149302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:23:49.131208  149302 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:23:49.131395  149302 main.go:141] libmachine: (auto-018985) Calling .GetMachineName
	I1212 01:23:49.131577  149302 main.go:141] libmachine: (auto-018985) Calling .DriverName
	I1212 01:23:49.131737  149302 start.go:159] libmachine.API.Create for "auto-018985" (driver="kvm2")
	I1212 01:23:49.131768  149302 client.go:168] LocalClient.Create starting
	I1212 01:23:49.131801  149302 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 01:23:49.131848  149302 main.go:141] libmachine: Decoding PEM data...
	I1212 01:23:49.131870  149302 main.go:141] libmachine: Parsing certificate...
	I1212 01:23:49.131948  149302 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 01:23:49.131975  149302 main.go:141] libmachine: Decoding PEM data...
	I1212 01:23:49.131994  149302 main.go:141] libmachine: Parsing certificate...
	I1212 01:23:49.132017  149302 main.go:141] libmachine: Running pre-create checks...
	I1212 01:23:49.132034  149302 main.go:141] libmachine: (auto-018985) Calling .PreCreateCheck
	I1212 01:23:49.132363  149302 main.go:141] libmachine: (auto-018985) Calling .GetConfigRaw
	I1212 01:23:49.132789  149302 main.go:141] libmachine: Creating machine...
	I1212 01:23:49.132805  149302 main.go:141] libmachine: (auto-018985) Calling .Create
	I1212 01:23:49.132966  149302 main.go:141] libmachine: (auto-018985) Creating KVM machine...
	I1212 01:23:49.134226  149302 main.go:141] libmachine: (auto-018985) DBG | found existing default KVM network
	I1212 01:23:49.135560  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:49.135377  149327 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:d6:61} reservation:<nil>}
	I1212 01:23:49.136282  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:49.136202  149327 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:01:2f:c1} reservation:<nil>}
	I1212 01:23:49.137317  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:49.137232  149327 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00038a770}
	I1212 01:23:49.137343  149302 main.go:141] libmachine: (auto-018985) DBG | created network xml: 
	I1212 01:23:49.137355  149302 main.go:141] libmachine: (auto-018985) DBG | <network>
	I1212 01:23:49.137362  149302 main.go:141] libmachine: (auto-018985) DBG |   <name>mk-auto-018985</name>
	I1212 01:23:49.137376  149302 main.go:141] libmachine: (auto-018985) DBG |   <dns enable='no'/>
	I1212 01:23:49.137389  149302 main.go:141] libmachine: (auto-018985) DBG |   
	I1212 01:23:49.137401  149302 main.go:141] libmachine: (auto-018985) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1212 01:23:49.137405  149302 main.go:141] libmachine: (auto-018985) DBG |     <dhcp>
	I1212 01:23:49.137414  149302 main.go:141] libmachine: (auto-018985) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1212 01:23:49.137460  149302 main.go:141] libmachine: (auto-018985) DBG |     </dhcp>
	I1212 01:23:49.137474  149302 main.go:141] libmachine: (auto-018985) DBG |   </ip>
	I1212 01:23:49.137480  149302 main.go:141] libmachine: (auto-018985) DBG |   
	I1212 01:23:49.137487  149302 main.go:141] libmachine: (auto-018985) DBG | </network>
	I1212 01:23:49.137493  149302 main.go:141] libmachine: (auto-018985) DBG | 
	I1212 01:23:49.143209  149302 main.go:141] libmachine: (auto-018985) DBG | trying to create private KVM network mk-auto-018985 192.168.61.0/24...
	I1212 01:23:49.218002  149302 main.go:141] libmachine: (auto-018985) DBG | private KVM network mk-auto-018985 192.168.61.0/24 created
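
The DBG lines above show the generated network XML and the private KVM network mk-auto-018985 being defined and started through libvirt. The following is a minimal Go sketch of those two steps, assuming the libvirt.org/go/libvirt bindings and the qemu:///system URI from the log; the XML literal and error handling are illustrative, not the kvm2 driver's actual code.

```go
package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// Network XML in the same shape as the "created network xml" block above.
const networkXML = `<network>
  <name>mk-auto-018985</name>
  <dns enable='no'/>
  <ip address='192.168.61.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.61.2' end='192.168.61.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Connect to the system libvirt daemon, matching KVMQemuURI in the config.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Define the persistent network object from the XML...
	nw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer nw.Free()

	// ...then start it, which brings up the bridge and its DHCP range.
	if err := nw.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	log.Println("private KVM network mk-auto-018985 created")
}
```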
	I1212 01:23:49.218050  149302 main.go:141] libmachine: (auto-018985) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985 ...
	I1212 01:23:49.218071  149302 main.go:141] libmachine: (auto-018985) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 01:23:49.218139  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:49.217951  149327 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:23:49.218215  149302 main.go:141] libmachine: (auto-018985) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 01:23:49.498373  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:49.498180  149327 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/id_rsa...
	I1212 01:23:49.602785  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:49.602624  149327 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/auto-018985.rawdisk...
	I1212 01:23:49.602825  149302 main.go:141] libmachine: (auto-018985) DBG | Writing magic tar header
	I1212 01:23:49.602840  149302 main.go:141] libmachine: (auto-018985) DBG | Writing SSH key tar header
	I1212 01:23:49.602853  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:49.602779  149327 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985 ...
	I1212 01:23:49.602947  149302 main.go:141] libmachine: (auto-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985
	I1212 01:23:49.602969  149302 main.go:141] libmachine: (auto-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 01:23:49.602982  149302 main.go:141] libmachine: (auto-018985) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985 (perms=drwx------)
	I1212 01:23:49.602996  149302 main.go:141] libmachine: (auto-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:23:49.603010  149302 main.go:141] libmachine: (auto-018985) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 01:23:49.603040  149302 main.go:141] libmachine: (auto-018985) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 01:23:49.603051  149302 main.go:141] libmachine: (auto-018985) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 01:23:49.603063  149302 main.go:141] libmachine: (auto-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 01:23:49.603076  149302 main.go:141] libmachine: (auto-018985) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 01:23:49.603085  149302 main.go:141] libmachine: (auto-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 01:23:49.603092  149302 main.go:141] libmachine: (auto-018985) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 01:23:49.603104  149302 main.go:141] libmachine: (auto-018985) DBG | Checking permissions on dir: /home/jenkins
	I1212 01:23:49.603119  149302 main.go:141] libmachine: (auto-018985) DBG | Checking permissions on dir: /home
	I1212 01:23:49.603131  149302 main.go:141] libmachine: (auto-018985) DBG | Skipping /home - not owner
	I1212 01:23:49.603142  149302 main.go:141] libmachine: (auto-018985) Creating domain...
	I1212 01:23:49.604282  149302 main.go:141] libmachine: (auto-018985) define libvirt domain using xml: 
	I1212 01:23:49.604336  149302 main.go:141] libmachine: (auto-018985) <domain type='kvm'>
	I1212 01:23:49.604351  149302 main.go:141] libmachine: (auto-018985)   <name>auto-018985</name>
	I1212 01:23:49.604362  149302 main.go:141] libmachine: (auto-018985)   <memory unit='MiB'>3072</memory>
	I1212 01:23:49.604371  149302 main.go:141] libmachine: (auto-018985)   <vcpu>2</vcpu>
	I1212 01:23:49.604385  149302 main.go:141] libmachine: (auto-018985)   <features>
	I1212 01:23:49.604394  149302 main.go:141] libmachine: (auto-018985)     <acpi/>
	I1212 01:23:49.604408  149302 main.go:141] libmachine: (auto-018985)     <apic/>
	I1212 01:23:49.604417  149302 main.go:141] libmachine: (auto-018985)     <pae/>
	I1212 01:23:49.604425  149302 main.go:141] libmachine: (auto-018985)     
	I1212 01:23:49.604434  149302 main.go:141] libmachine: (auto-018985)   </features>
	I1212 01:23:49.604442  149302 main.go:141] libmachine: (auto-018985)   <cpu mode='host-passthrough'>
	I1212 01:23:49.604458  149302 main.go:141] libmachine: (auto-018985)   
	I1212 01:23:49.604469  149302 main.go:141] libmachine: (auto-018985)   </cpu>
	I1212 01:23:49.604482  149302 main.go:141] libmachine: (auto-018985)   <os>
	I1212 01:23:49.604493  149302 main.go:141] libmachine: (auto-018985)     <type>hvm</type>
	I1212 01:23:49.604500  149302 main.go:141] libmachine: (auto-018985)     <boot dev='cdrom'/>
	I1212 01:23:49.604510  149302 main.go:141] libmachine: (auto-018985)     <boot dev='hd'/>
	I1212 01:23:49.604520  149302 main.go:141] libmachine: (auto-018985)     <bootmenu enable='no'/>
	I1212 01:23:49.604528  149302 main.go:141] libmachine: (auto-018985)   </os>
	I1212 01:23:49.604537  149302 main.go:141] libmachine: (auto-018985)   <devices>
	I1212 01:23:49.604546  149302 main.go:141] libmachine: (auto-018985)     <disk type='file' device='cdrom'>
	I1212 01:23:49.604560  149302 main.go:141] libmachine: (auto-018985)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/boot2docker.iso'/>
	I1212 01:23:49.604574  149302 main.go:141] libmachine: (auto-018985)       <target dev='hdc' bus='scsi'/>
	I1212 01:23:49.604585  149302 main.go:141] libmachine: (auto-018985)       <readonly/>
	I1212 01:23:49.604594  149302 main.go:141] libmachine: (auto-018985)     </disk>
	I1212 01:23:49.604603  149302 main.go:141] libmachine: (auto-018985)     <disk type='file' device='disk'>
	I1212 01:23:49.604615  149302 main.go:141] libmachine: (auto-018985)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 01:23:49.604630  149302 main.go:141] libmachine: (auto-018985)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/auto-018985.rawdisk'/>
	I1212 01:23:49.604651  149302 main.go:141] libmachine: (auto-018985)       <target dev='hda' bus='virtio'/>
	I1212 01:23:49.604662  149302 main.go:141] libmachine: (auto-018985)     </disk>
	I1212 01:23:49.604669  149302 main.go:141] libmachine: (auto-018985)     <interface type='network'>
	I1212 01:23:49.604683  149302 main.go:141] libmachine: (auto-018985)       <source network='mk-auto-018985'/>
	I1212 01:23:49.604693  149302 main.go:141] libmachine: (auto-018985)       <model type='virtio'/>
	I1212 01:23:49.604701  149302 main.go:141] libmachine: (auto-018985)     </interface>
	I1212 01:23:49.604710  149302 main.go:141] libmachine: (auto-018985)     <interface type='network'>
	I1212 01:23:49.604719  149302 main.go:141] libmachine: (auto-018985)       <source network='default'/>
	I1212 01:23:49.604732  149302 main.go:141] libmachine: (auto-018985)       <model type='virtio'/>
	I1212 01:23:49.604765  149302 main.go:141] libmachine: (auto-018985)     </interface>
	I1212 01:23:49.604786  149302 main.go:141] libmachine: (auto-018985)     <serial type='pty'>
	I1212 01:23:49.604813  149302 main.go:141] libmachine: (auto-018985)       <target port='0'/>
	I1212 01:23:49.604824  149302 main.go:141] libmachine: (auto-018985)     </serial>
	I1212 01:23:49.604833  149302 main.go:141] libmachine: (auto-018985)     <console type='pty'>
	I1212 01:23:49.604841  149302 main.go:141] libmachine: (auto-018985)       <target type='serial' port='0'/>
	I1212 01:23:49.604849  149302 main.go:141] libmachine: (auto-018985)     </console>
	I1212 01:23:49.604857  149302 main.go:141] libmachine: (auto-018985)     <rng model='virtio'>
	I1212 01:23:49.604885  149302 main.go:141] libmachine: (auto-018985)       <backend model='random'>/dev/random</backend>
	I1212 01:23:49.604903  149302 main.go:141] libmachine: (auto-018985)     </rng>
	I1212 01:23:49.604915  149302 main.go:141] libmachine: (auto-018985)     
	I1212 01:23:49.604931  149302 main.go:141] libmachine: (auto-018985)     
	I1212 01:23:49.604955  149302 main.go:141] libmachine: (auto-018985)   </devices>
	I1212 01:23:49.604969  149302 main.go:141] libmachine: (auto-018985) </domain>
	I1212 01:23:49.605005  149302 main.go:141] libmachine: (auto-018985) 
	I1212 01:23:49.609347  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:96:2d:dc in network default
	I1212 01:23:49.609983  149302 main.go:141] libmachine: (auto-018985) Ensuring networks are active...
	I1212 01:23:49.610007  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:49.610641  149302 main.go:141] libmachine: (auto-018985) Ensuring network default is active
	I1212 01:23:49.610955  149302 main.go:141] libmachine: (auto-018985) Ensuring network mk-auto-018985 is active
	I1212 01:23:49.611521  149302 main.go:141] libmachine: (auto-018985) Getting domain xml...
	I1212 01:23:49.612415  149302 main.go:141] libmachine: (auto-018985) Creating domain...
	I1212 01:23:50.957693  149302 main.go:141] libmachine: (auto-018985) Waiting to get IP...
	I1212 01:23:50.958690  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:50.959136  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:50.959175  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:50.959098  149327 retry.go:31] will retry after 282.563256ms: waiting for machine to come up
	I1212 01:23:51.243538  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:51.244079  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:51.244102  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:51.244032  149327 retry.go:31] will retry after 334.448102ms: waiting for machine to come up
	I1212 01:23:51.580789  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:51.581356  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:51.581397  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:51.581296  149327 retry.go:31] will retry after 360.048636ms: waiting for machine to come up
	I1212 01:23:51.942961  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:51.943496  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:51.943532  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:51.943440  149327 retry.go:31] will retry after 591.424343ms: waiting for machine to come up
	I1212 01:23:52.536277  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:52.536730  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:52.536752  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:52.536691  149327 retry.go:31] will retry after 616.721347ms: waiting for machine to come up
	I1212 01:23:53.155772  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:53.156319  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:53.156347  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:53.156270  149327 retry.go:31] will retry after 946.992297ms: waiting for machine to come up
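
Process 149302 is now polling for the new domain's DHCP lease, sleeping a growing, jittered interval between attempts ("will retry after ..."). A small standard-library sketch of that wait-for-condition pattern follows; the helper name and delay constants are hypothetical, and minikube's own retry helper differs in detail.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it succeeds or the timeout elapses, sleeping a
// jittered, growing delay between attempts, like the retries in the log.
func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	base := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		// Add jitter so concurrent waiters do not poll in lockstep.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		base = base * 3 / 2
	}
}

func main() {
	start := time.Now()
	err := waitFor(10*time.Second, func() error {
		// Stand-in for "look up the domain's IP in the libvirt DHCP leases".
		if time.Since(start) < 3*time.Second {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```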
	I1212 01:23:54.010860  148785 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:23:54.025205  148785 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
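
The 496-byte conflist pushed to /etc/cni/net.d/1-k8s.conflist above is not captured in the log. The sketch below writes an illustrative bridge CNI configuration of the general shape such a file takes; every field value (subnet, plugin options) is an assumption for illustration, not the contents of minikube's actual file.

```go
package main

import (
	"log"
	"os"
)

// Illustrative bridge CNI conflist; the real 1-k8s.conflist is not shown in
// the log, so these values are placeholders.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote bridge CNI config")
}
```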
	I1212 01:23:54.049080  148785 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:23:54.049233  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:54.049236  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-819544 minikube.k8s.io/updated_at=2024_12_12T01_23_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=newest-cni-819544 minikube.k8s.io/primary=true
	I1212 01:23:54.285495  148785 ops.go:34] apiserver oom_adj: -16
	I1212 01:23:54.285526  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:54.786537  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:55.285797  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:55.786433  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:56.286198  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:56.785665  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:57.286495  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:57.785657  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:58.286186  148785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:23:58.432959  148785 kubeadm.go:1113] duration metric: took 4.383793551s to wait for elevateKubeSystemPrivileges
	I1212 01:23:58.433003  148785 kubeadm.go:394] duration metric: took 15.356840845s to StartCluster
	I1212 01:23:58.433026  148785 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:58.433108  148785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:23:58.435548  148785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:58.435831  148785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 01:23:58.435854  148785 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:23:58.435903  148785 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:23:58.436032  148785 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-819544"
	I1212 01:23:58.436077  148785 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-819544"
	I1212 01:23:58.436081  148785 addons.go:69] Setting default-storageclass=true in profile "newest-cni-819544"
	I1212 01:23:58.436113  148785 host.go:66] Checking if "newest-cni-819544" exists ...
	I1212 01:23:58.436110  148785 config.go:182] Loaded profile config "newest-cni-819544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:23:58.436115  148785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-819544"
	I1212 01:23:58.436616  148785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:23:58.436668  148785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:23:58.436747  148785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:23:58.436779  148785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:23:58.437663  148785 out.go:177] * Verifying Kubernetes components...
	I1212 01:23:58.439127  148785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:58.452782  148785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
	I1212 01:23:58.452848  148785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I1212 01:23:58.453259  148785 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:23:58.453367  148785 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:23:58.453897  148785 main.go:141] libmachine: Using API Version  1
	I1212 01:23:58.453917  148785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:23:58.454055  148785 main.go:141] libmachine: Using API Version  1
	I1212 01:23:58.454081  148785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:23:58.454258  148785 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:23:58.454441  148785 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:23:58.454506  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetState
	I1212 01:23:58.454967  148785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:23:58.454991  148785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:23:58.458168  148785 addons.go:234] Setting addon default-storageclass=true in "newest-cni-819544"
	I1212 01:23:58.458203  148785 host.go:66] Checking if "newest-cni-819544" exists ...
	I1212 01:23:58.458498  148785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:23:58.458520  148785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:23:58.478791  148785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45775
	I1212 01:23:58.478827  148785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37969
	I1212 01:23:58.479293  148785 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:23:58.479339  148785 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:23:58.479860  148785 main.go:141] libmachine: Using API Version  1
	I1212 01:23:58.479875  148785 main.go:141] libmachine: Using API Version  1
	I1212 01:23:58.479879  148785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:23:58.479903  148785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:23:58.480304  148785 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:23:58.480306  148785 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:23:58.480497  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetState
	I1212 01:23:58.480793  148785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:23:58.480814  148785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:23:58.482643  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:58.484913  148785 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:23:54.104559  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:54.105052  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:54.105082  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:54.105008  149327 retry.go:31] will retry after 1.078000238s: waiting for machine to come up
	I1212 01:23:55.184592  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:55.185077  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:55.185102  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:55.185039  149327 retry.go:31] will retry after 1.258534438s: waiting for machine to come up
	I1212 01:23:56.444806  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:56.445257  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:56.445285  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:56.445224  149327 retry.go:31] will retry after 1.27924847s: waiting for machine to come up
	I1212 01:23:57.725774  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:57.726157  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:57.726182  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:57.726097  149327 retry.go:31] will retry after 1.53138727s: waiting for machine to come up
	I1212 01:23:58.486213  148785 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:23:58.486228  148785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:23:58.486250  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:58.488814  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:58.489247  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:58.489281  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:58.489417  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:58.489544  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:58.489670  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:58.489759  148785 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa Username:docker}
	I1212 01:23:58.501288  148785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34953
	I1212 01:23:58.501691  148785 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:23:58.502259  148785 main.go:141] libmachine: Using API Version  1
	I1212 01:23:58.502280  148785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:23:58.502616  148785 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:23:58.502796  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetState
	I1212 01:23:58.504597  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:58.504832  148785 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:23:58.504849  148785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:23:58.504866  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:58.508174  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:58.508517  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:58.508536  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:58.508696  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:58.508811  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:58.508967  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:58.509039  148785 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa Username:docker}
	I1212 01:23:58.787351  148785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:23:58.787398  148785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 01:23:58.898472  148785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:23:59.002730  148785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:23:59.674836  148785 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1212 01:23:59.674910  148785 main.go:141] libmachine: Making call to close driver server
	I1212 01:23:59.674932  148785 main.go:141] libmachine: (newest-cni-819544) Calling .Close
	I1212 01:23:59.675368  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Closing plugin on server side
	I1212 01:23:59.675454  148785 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:23:59.675465  148785 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:23:59.675473  148785 main.go:141] libmachine: Making call to close driver server
	I1212 01:23:59.675482  148785 main.go:141] libmachine: (newest-cni-819544) Calling .Close
	I1212 01:23:59.675757  148785 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:23:59.675788  148785 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:23:59.676691  148785 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:23:59.677197  148785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:23:59.722104  148785 main.go:141] libmachine: Making call to close driver server
	I1212 01:23:59.722133  148785 main.go:141] libmachine: (newest-cni-819544) Calling .Close
	I1212 01:23:59.722576  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Closing plugin on server side
	I1212 01:23:59.722639  148785 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:23:59.722660  148785 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:24:00.182213  148785 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-819544" context rescaled to 1 replicas
	I1212 01:24:00.309575  148785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.306781333s)
	I1212 01:24:00.309643  148785 main.go:141] libmachine: Making call to close driver server
	I1212 01:24:00.309696  148785 main.go:141] libmachine: (newest-cni-819544) Calling .Close
	I1212 01:24:00.309694  148785 api_server.go:72] duration metric: took 1.873804731s to wait for apiserver process to appear ...
	I1212 01:24:00.309717  148785 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:24:00.309761  148785 api_server.go:253] Checking apiserver healthz at https://192.168.72.217:8443/healthz ...
	I1212 01:24:00.310137  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Closing plugin on server side
	I1212 01:24:00.310172  148785 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:24:00.310187  148785 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:24:00.310208  148785 main.go:141] libmachine: Making call to close driver server
	I1212 01:24:00.310216  148785 main.go:141] libmachine: (newest-cni-819544) Calling .Close
	I1212 01:24:00.310548  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Closing plugin on server side
	I1212 01:24:00.312034  148785 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:24:00.312056  148785 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:24:00.314675  148785 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1212 01:24:00.316000  148785 addons.go:510] duration metric: took 1.880096285s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1212 01:24:00.321688  148785 api_server.go:279] https://192.168.72.217:8443/healthz returned 200:
	ok
	I1212 01:24:00.331203  148785 api_server.go:141] control plane version: v1.31.2
	I1212 01:24:00.331239  148785 api_server.go:131] duration metric: took 21.513611ms to wait for apiserver health ...
	I1212 01:24:00.331261  148785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:24:00.344217  148785 system_pods.go:59] 8 kube-system pods found
	I1212 01:24:00.344253  148785 system_pods.go:61] "coredns-7c65d6cfc9-2zsc9" [b28bc3e7-d579-4f76-8be0-f95369bd14f3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:24:00.344263  148785 system_pods.go:61] "coredns-7c65d6cfc9-hc24d" [a5820bba-6ec2-401e-a6f6-d1e42783bbc1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:24:00.344274  148785 system_pods.go:61] "etcd-newest-cni-819544" [8229c494-9599-46bf-8eb8-c43639c04692] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:24:00.344280  148785 system_pods.go:61] "kube-apiserver-newest-cni-819544" [cf6ee9f0-ae27-4f7d-a1e3-595d86b4db87] Running
	I1212 01:24:00.344291  148785 system_pods.go:61] "kube-controller-manager-newest-cni-819544" [776cccad-97d1-4695-ae0d-23b0dde0dc6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:24:00.344297  148785 system_pods.go:61] "kube-proxy-hp9mg" [a1cf085c-1149-449d-93e1-c739dfe95acb] Running
	I1212 01:24:00.344305  148785 system_pods.go:61] "kube-scheduler-newest-cni-819544" [5bde80c1-b5c2-4c2d-874e-13e64c429cfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:24:00.344309  148785 system_pods.go:61] "storage-provisioner" [9b4b1729-caaf-4fcb-9efa-e940df3b1b01] Pending
	I1212 01:24:00.344316  148785 system_pods.go:74] duration metric: took 13.048122ms to wait for pod list to return data ...
	I1212 01:24:00.344323  148785 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:24:00.353650  148785 default_sa.go:45] found service account: "default"
	I1212 01:24:00.353674  148785 default_sa.go:55] duration metric: took 9.345055ms for default service account to be created ...
	I1212 01:24:00.353687  148785 kubeadm.go:582] duration metric: took 1.917804223s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:24:00.353702  148785 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:24:00.360941  148785 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:24:00.360981  148785 node_conditions.go:123] node cpu capacity is 2
	I1212 01:24:00.361038  148785 node_conditions.go:105] duration metric: took 7.330011ms to run NodePressure ...
	I1212 01:24:00.361062  148785 start.go:241] waiting for startup goroutines ...
	I1212 01:24:00.361071  148785 start.go:246] waiting for cluster config update ...
	I1212 01:24:00.361085  148785 start.go:255] writing updated cluster config ...
	I1212 01:24:00.361433  148785 ssh_runner.go:195] Run: rm -f paused
	I1212 01:24:00.433942  148785 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:24:00.435748  148785 out.go:177] * Done! kubectl is now configured to use "newest-cni-819544" cluster and "default" namespace by default
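For context on the healthz wait shown above (api_server.go:253 checking https://192.168.72.217:8443/healthz and accepting a 200 with body "ok"): the check amounts to polling that endpoint over TLS until it answers healthy or a deadline expires. The snippet below is a minimal standalone sketch of such a loop, not minikube's actual implementation; the endpoint URL is the one from the log above, while the two-minute budget, poll interval, and skipping of certificate verification are illustrative assumptions.

    // Illustrative sketch: poll an apiserver /healthz endpoint until it
    // returns 200 "ok" or a deadline passes. Not minikube's real code.
    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver presents a cluster-local certificate here, so this
                // illustrative probe skips verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    return nil // apiserver reports healthy
                }
            }
            time.Sleep(500 * time.Millisecond) // poll interval: assumed value
        }
        return errors.New("timed out waiting for " + url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.217:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver healthz returned 200: ok")
    }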
	I1212 01:23:59.258955  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:23:59.259473  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:23:59.259506  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:23:59.259409  149327 retry.go:31] will retry after 2.905177963s: waiting for machine to come up
	I1212 01:24:02.166286  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:02.166742  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:24:02.166774  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:24:02.166685  149327 retry.go:31] will retry after 3.399538712s: waiting for machine to come up
	I1212 01:24:05.567959  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:05.568429  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:24:05.568457  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:24:05.568383  149327 retry.go:31] will retry after 3.13225199s: waiting for machine to come up
	I1212 01:24:08.701751  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:08.702230  149302 main.go:141] libmachine: (auto-018985) DBG | unable to find current IP address of domain auto-018985 in network mk-auto-018985
	I1212 01:24:08.702253  149302 main.go:141] libmachine: (auto-018985) DBG | I1212 01:24:08.702190  149327 retry.go:31] will retry after 4.999081734s: waiting for machine to come up
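The retry.go lines above show the KVM driver repeatedly re-querying for the auto-018985 machine's IP address with jittered delays of roughly three to five seconds per attempt. Below is a minimal sketch of that wait-for-IP pattern under stated assumptions: lookupIP is a made-up stand-in (not a libmachine API), the returned address is a TEST-NET placeholder, and the delay range simply mirrors the values visible in the log.

    // Illustrative sketch: retry with jittered backoff until a machine
    // reports an IP address. lookupIP is a hypothetical helper.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for "ask libvirt/DHCP for the machine's current IP".
    // It fails for the first few attempts to mimic the log above.
    func lookupIP(attempt int) (string, error) {
        if attempt < 4 {
            return "", errors.New("unable to find current IP address")
        }
        return "192.0.2.10", nil // TEST-NET placeholder, not a real lease
    }

    func waitForIP(maxAttempts int) (string, error) {
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                return ip, nil
            }
            // Back off 2.0s-5.0s with jitter, roughly matching the delays logged above.
            delay := time.Duration(2000+rand.Intn(3000)) * time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
        return "", errors.New("machine never reported an IP address")
    }

    func main() {
        ip, err := waitForIP(10)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("machine is up at", ip)
    }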
	
	
	==> CRI-O <==
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.848354948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966653848314246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1c27004-4b56-4439-aef5-8e52c069a5cb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.849158337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d24553c-d620-44e1-b140-edc7708ef7ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.849238938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d24553c-d620-44e1-b140-edc7708ef7ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.849517388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b,PodSandboxId:7bd5c6035c545d5cccefe7a23c8bb59095348e7bde2c3312f9073a7a2291b45f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965690355000560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2421890-0e6b-4d0b-8967-6f0103e90996,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480,PodSandboxId:3152ac5313cbeb1a341e22412cee647680766467bbe4e817b73312ae41ee9e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689879019165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m7b7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02e714b4-3e8d-4c9d-90e3-6fba636190fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f,PodSandboxId:f1872f87960f17a9169aac0cee98fe1a8176b117c54c97ee53a1fe3623bcc7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689718052285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m27d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
420ab7f-7518-41da-a83f-8339380f5bff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b,PodSandboxId:cd10ed771a7e456c23ae09f355ab3afbb8f4f38f68f2641a3be625fad9289629,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733965688916169698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hw4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ae27b6f-a174-42eb-96a7-2e94f0f916c1,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253,PodSandboxId:e337ffcde9aa9e106ffaa87bf55f781424a5ed353fd4f336f3f6751ca8e42a31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965677383463403,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c62310e16a862118a6bb643a830190,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be,PodSandboxId:b2343493f8f8cf4ad4f12a631acd93c417e7116493d6e6c09837f0120170c88a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965677310706185,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8815a42f8d3f4d605ba1f04e219c7be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a,PodSandboxId:d38a210a7a58ed877c1d362dd5b9b1ece116e85ead2b4e386a551978c741a34c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965677256582083,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54ab2f74aadec385c2404a16c8ac8ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63,PodSandboxId:d5a5e4fb984dc5fb0d14b4c23de214365ada47091c23505e952e736e9dfc6090,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965677201235624,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a,PodSandboxId:5fc634b841058120e7ee64dfe672aec079f49ac69b6aa14f5a9ecf8a51bc7103,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965387912524363,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d24553c-d620-44e1-b140-edc7708ef7ae name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.904066259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=782a1a27-855c-4c60-a354-7072ec92e364 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.904160193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=782a1a27-855c-4c60-a354-7072ec92e364 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.905832937Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efded3d8-75f7-47f0-9933-46761a0e3ddf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.906357577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966653906335190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efded3d8-75f7-47f0-9933-46761a0e3ddf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.906981649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ccc7925-76d6-4b1c-b798-9284d6dd9434 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.907037197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ccc7925-76d6-4b1c-b798-9284d6dd9434 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.907244396Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b,PodSandboxId:7bd5c6035c545d5cccefe7a23c8bb59095348e7bde2c3312f9073a7a2291b45f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965690355000560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2421890-0e6b-4d0b-8967-6f0103e90996,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480,PodSandboxId:3152ac5313cbeb1a341e22412cee647680766467bbe4e817b73312ae41ee9e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689879019165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m7b7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02e714b4-3e8d-4c9d-90e3-6fba636190fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f,PodSandboxId:f1872f87960f17a9169aac0cee98fe1a8176b117c54c97ee53a1fe3623bcc7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689718052285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m27d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
420ab7f-7518-41da-a83f-8339380f5bff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b,PodSandboxId:cd10ed771a7e456c23ae09f355ab3afbb8f4f38f68f2641a3be625fad9289629,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733965688916169698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hw4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ae27b6f-a174-42eb-96a7-2e94f0f916c1,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253,PodSandboxId:e337ffcde9aa9e106ffaa87bf55f781424a5ed353fd4f336f3f6751ca8e42a31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965677383463403,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c62310e16a862118a6bb643a830190,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be,PodSandboxId:b2343493f8f8cf4ad4f12a631acd93c417e7116493d6e6c09837f0120170c88a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965677310706185,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8815a42f8d3f4d605ba1f04e219c7be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a,PodSandboxId:d38a210a7a58ed877c1d362dd5b9b1ece116e85ead2b4e386a551978c741a34c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965677256582083,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54ab2f74aadec385c2404a16c8ac8ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63,PodSandboxId:d5a5e4fb984dc5fb0d14b4c23de214365ada47091c23505e952e736e9dfc6090,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965677201235624,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a,PodSandboxId:5fc634b841058120e7ee64dfe672aec079f49ac69b6aa14f5a9ecf8a51bc7103,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965387912524363,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ccc7925-76d6-4b1c-b798-9284d6dd9434 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.946186242Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=64b0b868-0913-44a1-b523-e5d4a0339432 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.946257652Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=64b0b868-0913-44a1-b523-e5d4a0339432 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.947526599Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=11f52a75-1ba2-4b09-a69e-57715311f827 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.947895751Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966653947875498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=11f52a75-1ba2-4b09-a69e-57715311f827 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.948594581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=316be593-ddfd-42a2-bd1b-3bb8fc30f25c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.948645455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=316be593-ddfd-42a2-bd1b-3bb8fc30f25c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.948828578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b,PodSandboxId:7bd5c6035c545d5cccefe7a23c8bb59095348e7bde2c3312f9073a7a2291b45f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965690355000560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2421890-0e6b-4d0b-8967-6f0103e90996,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480,PodSandboxId:3152ac5313cbeb1a341e22412cee647680766467bbe4e817b73312ae41ee9e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689879019165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m7b7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02e714b4-3e8d-4c9d-90e3-6fba636190fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f,PodSandboxId:f1872f87960f17a9169aac0cee98fe1a8176b117c54c97ee53a1fe3623bcc7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689718052285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m27d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
420ab7f-7518-41da-a83f-8339380f5bff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b,PodSandboxId:cd10ed771a7e456c23ae09f355ab3afbb8f4f38f68f2641a3be625fad9289629,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733965688916169698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hw4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ae27b6f-a174-42eb-96a7-2e94f0f916c1,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253,PodSandboxId:e337ffcde9aa9e106ffaa87bf55f781424a5ed353fd4f336f3f6751ca8e42a31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965677383463403,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c62310e16a862118a6bb643a830190,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be,PodSandboxId:b2343493f8f8cf4ad4f12a631acd93c417e7116493d6e6c09837f0120170c88a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965677310706185,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8815a42f8d3f4d605ba1f04e219c7be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a,PodSandboxId:d38a210a7a58ed877c1d362dd5b9b1ece116e85ead2b4e386a551978c741a34c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965677256582083,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54ab2f74aadec385c2404a16c8ac8ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63,PodSandboxId:d5a5e4fb984dc5fb0d14b4c23de214365ada47091c23505e952e736e9dfc6090,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965677201235624,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a,PodSandboxId:5fc634b841058120e7ee64dfe672aec079f49ac69b6aa14f5a9ecf8a51bc7103,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965387912524363,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=316be593-ddfd-42a2-bd1b-3bb8fc30f25c name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.988978943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=993856be-1c0e-4a17-ad5c-658bb733cae8 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.989052862Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=993856be-1c0e-4a17-ad5c-658bb733cae8 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.990579733Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eead6ff9-2c79-4a41-9960-132d48cd6ee4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.991052789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966653991029851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eead6ff9-2c79-4a41-9960-132d48cd6ee4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.992159585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82d44596-0392-4773-bc4b-aa9c33a0a0ee name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.992212351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82d44596-0392-4773-bc4b-aa9c33a0a0ee name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:24:13 embed-certs-607268 crio[724]: time="2024-12-12 01:24:13.992404753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b,PodSandboxId:7bd5c6035c545d5cccefe7a23c8bb59095348e7bde2c3312f9073a7a2291b45f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965690355000560,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2421890-0e6b-4d0b-8967-6f0103e90996,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480,PodSandboxId:3152ac5313cbeb1a341e22412cee647680766467bbe4e817b73312ae41ee9e74,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689879019165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m7b7f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02e714b4-3e8d-4c9d-90e3-6fba636190fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f,PodSandboxId:f1872f87960f17a9169aac0cee98fe1a8176b117c54c97ee53a1fe3623bcc7cd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965689718052285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m27d6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
420ab7f-7518-41da-a83f-8339380f5bff,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b,PodSandboxId:cd10ed771a7e456c23ae09f355ab3afbb8f4f38f68f2641a3be625fad9289629,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt
:1733965688916169698,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6hw4b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ae27b6f-a174-42eb-96a7-2e94f0f916c1,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253,PodSandboxId:e337ffcde9aa9e106ffaa87bf55f781424a5ed353fd4f336f3f6751ca8e42a31,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965677383463403,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c62310e16a862118a6bb643a830190,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be,PodSandboxId:b2343493f8f8cf4ad4f12a631acd93c417e7116493d6e6c09837f0120170c88a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965677310706185,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8815a42f8d3f4d605ba1f04e219c7be,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a,PodSandboxId:d38a210a7a58ed877c1d362dd5b9b1ece116e85ead2b4e386a551978c741a34c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965677256582083,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54ab2f74aadec385c2404a16c8ac8ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63,PodSandboxId:d5a5e4fb984dc5fb0d14b4c23de214365ada47091c23505e952e736e9dfc6090,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965677201235624,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a,PodSandboxId:5fc634b841058120e7ee64dfe672aec079f49ac69b6aa14f5a9ecf8a51bc7103,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965387912524363,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-607268,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0104cd127d45d05f5754d3d981619cb6,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82d44596-0392-4773-bc4b-aa9c33a0a0ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df0b9bba2d3dc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   7bd5c6035c545       storage-provisioner
	8dbe54cf32496       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   3152ac5313cbe       coredns-7c65d6cfc9-m7b7f
	a661613ef780a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   f1872f87960f1       coredns-7c65d6cfc9-m27d6
	209fac6c58bc9       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   16 minutes ago      Running             kube-proxy                0                   cd10ed771a7e4       kube-proxy-6hw4b
	850c440c91002       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   e337ffcde9aa9       etcd-embed-certs-607268
	cae3fe867e45c       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   16 minutes ago      Running             kube-scheduler            2                   b2343493f8f8c       kube-scheduler-embed-certs-607268
	70fdf1afdd3c3       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   16 minutes ago      Running             kube-controller-manager   2                   d38a210a7a58e       kube-controller-manager-embed-certs-607268
	f6066cda2bc1b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   16 minutes ago      Running             kube-apiserver            2                   d5a5e4fb984dc       kube-apiserver-embed-certs-607268
	a83f1d614dc5f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 minutes ago      Exited              kube-apiserver            1                   5fc634b841058       kube-apiserver-embed-certs-607268
	
	
	==> coredns [8dbe54cf324968038ca2a6a82ca851b717c8a74318401c9cdd913829cf5d7480] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a661613ef780af513a5efca32744065af384310f3ff00cc2ca573e801ec6e07f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-607268
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-607268
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=embed-certs-607268
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_12T01_08_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 01:08:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-607268
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 01:24:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 01:23:30 +0000   Thu, 12 Dec 2024 01:07:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 01:23:30 +0000   Thu, 12 Dec 2024 01:07:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 01:23:30 +0000   Thu, 12 Dec 2024 01:07:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 01:23:30 +0000   Thu, 12 Dec 2024 01:08:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.151
	  Hostname:    embed-certs-607268
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 082bfb4f2b144015b1981937ac6a2f95
	  System UUID:                082bfb4f-2b14-4015-b198-1937ac6a2f95
	  Boot ID:                    c66ba1f4-be69-4247-abed-b8d00f3658f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-m27d6                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-m7b7f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-607268                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-607268             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-607268    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-6hw4b                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-607268             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-glcnv               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-607268 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-607268 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-607268 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node embed-certs-607268 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node embed-certs-607268 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node embed-certs-607268 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-607268 event: Registered Node embed-certs-607268 in Controller
	
	
	==> dmesg <==
	[  +0.052754] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040937] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.915455] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.755035] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635579] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.378417] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.056286] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064501] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[Dec12 01:03] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +0.176305] systemd-fstab-generator[684]: Ignoring "noauto" option for root device
	[  +0.310752] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.281092] systemd-fstab-generator[807]: Ignoring "noauto" option for root device
	[  +0.061656] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.925583] systemd-fstab-generator[928]: Ignoring "noauto" option for root device
	[  +4.576462] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.631969] kauditd_printk_skb: 85 callbacks suppressed
	[Dec12 01:07] systemd-fstab-generator[2603]: Ignoring "noauto" option for root device
	[  +0.069949] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 01:08] systemd-fstab-generator[2920]: Ignoring "noauto" option for root device
	[  +0.069077] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.348118] systemd-fstab-generator[3052]: Ignoring "noauto" option for root device
	[  +0.109448] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.313533] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [850c440c91002f909bd6afacf0b1e34370a95cd085cde18caf05f8c939bfa253] <==
	{"level":"info","ts":"2024-12-12T01:07:58.995021Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:07:58.995256Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:07:58.995455Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:07:58.997398Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:07:58.999132Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.151:2379"}
	{"level":"info","ts":"2024-12-12T01:07:59.001015Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-12T01:07:59.001051Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-12T01:07:59.001704Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:07:59.002642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-12T01:07:59.003018Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"336dbfe96cdae58d","local-member-id":"bb1641fc01920074","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:07:59.003104Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:07:59.003142Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:17:59.045495Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-12-12T01:17:59.054807Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":722,"took":"8.970037ms","hash":3667836608,"current-db-size-bytes":2256896,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2256896,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-12-12T01:17:59.054864Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3667836608,"revision":722,"compact-revision":-1}
	{"level":"info","ts":"2024-12-12T01:22:59.053652Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":965}
	{"level":"info","ts":"2024-12-12T01:22:59.057819Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":965,"took":"3.442549ms","hash":2877617162,"current-db-size-bytes":2256896,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1581056,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-12T01:22:59.057996Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2877617162,"revision":965,"compact-revision":722}
	{"level":"info","ts":"2024-12-12T01:23:43.617033Z","caller":"traceutil/trace.go:171","msg":"trace[210459862] linearizableReadLoop","detail":"{readStateIndex:1453; appliedIndex:1452; }","duration":"259.578167ms","start":"2024-12-12T01:23:43.357426Z","end":"2024-12-12T01:23:43.617004Z","steps":["trace[210459862] 'read index received'  (duration: 259.36079ms)","trace[210459862] 'applied index is now lower than readState.Index'  (duration: 216.93µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-12T01:23:43.617250Z","caller":"traceutil/trace.go:171","msg":"trace[1700979097] transaction","detail":"{read_only:false; response_revision:1248; number_of_response:1; }","duration":"309.47018ms","start":"2024-12-12T01:23:43.307759Z","end":"2024-12-12T01:23:43.617230Z","steps":["trace[1700979097] 'process raft request'  (duration: 309.070428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-12T01:23:43.617438Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.949873ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-12T01:23:43.617550Z","caller":"traceutil/trace.go:171","msg":"trace[121079077] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1248; }","duration":"260.117033ms","start":"2024-12-12T01:23:43.357419Z","end":"2024-12-12T01:23:43.617536Z","steps":["trace[121079077] 'agreement among raft nodes before linearized reading'  (duration: 259.859154ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-12T01:23:43.617727Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.46611ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-12T01:23:43.617771Z","caller":"traceutil/trace.go:171","msg":"trace[1162115304] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1248; }","duration":"139.518158ms","start":"2024-12-12T01:23:43.478248Z","end":"2024-12-12T01:23:43.617766Z","steps":["trace[1162115304] 'agreement among raft nodes before linearized reading'  (duration: 139.456711ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-12T01:23:43.617950Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-12T01:23:43.307738Z","time spent":"309.569353ms","remote":"127.0.0.1:39948","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1245 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 01:24:14 up 21 min,  0 users,  load average: 0.35, 0.25, 0.20
	Linux embed-certs-607268 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a83f1d614dc5f821c1dc35bcbff0b751b9eaac89bae40fa755615bdd0f2d968a] <==
	W1212 01:07:53.422465       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.433194       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.445855       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.554614       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.630654       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.711240       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.752077       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.767118       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.839788       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.853441       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.898160       1 logging.go:55] [core] [Channel #13 SubChannel #16]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.947202       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:53.957283       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.051074       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.073197       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.090568       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.107465       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.136333       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.276206       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.314174       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.314410       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.329108       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.370190       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.403536       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:07:54.441668       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f6066cda2bc1b172748c47a16e0aa684e6e9b5dcf80b12926b6042fe2faebd63] <==
	I1212 01:21:01.531422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:21:01.531482       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:23:00.530643       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:23:00.531051       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1212 01:23:01.532899       1 handler_proxy.go:99] no RequestInfo found in the context
	W1212 01:23:01.533072       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:23:01.533148       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1212 01:23:01.533256       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:23:01.534345       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:23:01.534429       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:24:01.535066       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:24:01.535157       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1212 01:24:01.535283       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:24:01.535319       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:24:01.536499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:24:01.536597       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [70fdf1afdd3c3630276137fd9384426a50331b681eb4bacc2814628667a2210a] <==
	E1212 01:19:07.516618       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:19:08.058558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="101.197µs"
	I1212 01:19:08.065172       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:19:37.524050       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:19:38.074485       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:20:07.531252       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:20:08.084596       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:20:37.538163       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:20:38.095055       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:21:07.544764       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:21:08.103723       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:21:37.553991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:21:38.113557       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:22:07.560659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:22:08.121458       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:22:37.567110       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:22:38.129497       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:23:07.575480       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:23:08.138159       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:23:30.215746       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-607268"
	E1212 01:23:37.582481       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:23:38.148385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:24:06.063027       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="402.574µs"
	E1212 01:24:07.589507       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:24:08.156901       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [209fac6c58bc992b49898eb7e06fcfa5ef6e58e0556f51b2ba1e2e397898af0b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1212 01:08:09.396588       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1212 01:08:09.495958       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.151"]
	E1212 01:08:09.496057       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:08:09.594651       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 01:08:09.601028       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:08:09.601095       1 server_linux.go:169] "Using iptables Proxier"
	I1212 01:08:09.605463       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:08:09.605710       1 server.go:483] "Version info" version="v1.31.2"
	I1212 01:08:09.605743       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:08:09.607328       1 config.go:199] "Starting service config controller"
	I1212 01:08:09.607369       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1212 01:08:09.607396       1 config.go:105] "Starting endpoint slice config controller"
	I1212 01:08:09.607400       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1212 01:08:09.607792       1 config.go:328] "Starting node config controller"
	I1212 01:08:09.607827       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1212 01:08:09.708416       1 shared_informer.go:320] Caches are synced for service config
	I1212 01:08:09.708471       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1212 01:08:09.708276       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cae3fe867e45c18812f6ff9bd86af92c58dcab4708da4d05bca17e5e53b521be] <==
	W1212 01:08:00.552575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:00.553361       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:00.553398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 01:08:00.553425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:00.552526       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 01:08:00.553473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.395647       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 01:08:01.395743       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1212 01:08:01.422540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 01:08:01.422639       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.459712       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 01:08:01.459771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.493498       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 01:08:01.493584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.494644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:01.494697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.547126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1212 01:08:01.547205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.595649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:01.595943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.667362       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 01:08:01.667429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:01.730797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 01:08:01.730852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1212 01:08:04.044056       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 01:23:17 embed-certs-607268 kubelet[2927]: E1212 01:23:17.039350    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:23:23 embed-certs-607268 kubelet[2927]: E1212 01:23:23.295836    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966603295401568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:23 embed-certs-607268 kubelet[2927]: E1212 01:23:23.295894    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966603295401568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:30 embed-certs-607268 kubelet[2927]: E1212 01:23:30.039090    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:23:33 embed-certs-607268 kubelet[2927]: E1212 01:23:33.297695    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966613297409039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:33 embed-certs-607268 kubelet[2927]: E1212 01:23:33.297735    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966613297409039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:43 embed-certs-607268 kubelet[2927]: E1212 01:23:43.299373    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966623298861835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:43 embed-certs-607268 kubelet[2927]: E1212 01:23:43.299417    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966623298861835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:44 embed-certs-607268 kubelet[2927]: E1212 01:23:44.040544    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:23:53 embed-certs-607268 kubelet[2927]: E1212 01:23:53.301724    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966633301156352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:53 embed-certs-607268 kubelet[2927]: E1212 01:23:53.302218    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966633301156352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:55 embed-certs-607268 kubelet[2927]: E1212 01:23:55.053493    2927 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 12 01:23:55 embed-certs-607268 kubelet[2927]: E1212 01:23:55.053565    2927 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 12 01:23:55 embed-certs-607268 kubelet[2927]: E1212 01:23:55.053784    2927 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7p2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-glcnv_kube-system(3c8b3109-dfcf-4329-84ff-a4c5b566b0d3): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 12 01:23:55 embed-certs-607268 kubelet[2927]: E1212 01:23:55.055322    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:24:03 embed-certs-607268 kubelet[2927]: E1212 01:24:03.086062    2927 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 01:24:03 embed-certs-607268 kubelet[2927]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 01:24:03 embed-certs-607268 kubelet[2927]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 01:24:03 embed-certs-607268 kubelet[2927]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 01:24:03 embed-certs-607268 kubelet[2927]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 01:24:03 embed-certs-607268 kubelet[2927]: E1212 01:24:03.304011    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966643303608367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:24:03 embed-certs-607268 kubelet[2927]: E1212 01:24:03.304050    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966643303608367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:24:06 embed-certs-607268 kubelet[2927]: E1212 01:24:06.040192    2927 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-glcnv" podUID="3c8b3109-dfcf-4329-84ff-a4c5b566b0d3"
	Dec 12 01:24:13 embed-certs-607268 kubelet[2927]: E1212 01:24:13.305838    2927 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966653305369544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:24:13 embed-certs-607268 kubelet[2927]: E1212 01:24:13.305887    2927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966653305369544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [df0b9bba2d3dc833fe39c408137568bcfccb0bf37e7fcbbf541b01f173f3d16b] <==
	I1212 01:08:10.446880       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 01:08:10.459601       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 01:08:10.460218       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 01:08:10.475858       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 01:08:10.476062       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-607268_176bbe4b-7797-4d5d-8558-62057adab84e!
	I1212 01:08:10.478820       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9c38675b-8920-41c0-a3b3-8c11ef2dcf86", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-607268_176bbe4b-7797-4d5d-8558-62057adab84e became leader
	I1212 01:08:10.576625       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-607268_176bbe4b-7797-4d5d-8558-62057adab84e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-607268 -n embed-certs-607268
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-607268 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-glcnv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-607268 describe pod metrics-server-6867b74b74-glcnv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-607268 describe pod metrics-server-6867b74b74-glcnv: exit status 1 (61.971395ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-glcnv" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-607268 describe pod metrics-server-6867b74b74-glcnv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (415.07s)
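The kubelet messages above show why the metrics-server check in this group fails: the test rewrites the MetricsServer image so that it pulls from the unreachable registry fake.domain (the same --registries=MetricsServer=fake.domain override visible for other profiles in the audit table further below), so the pod never leaves ImagePullBackOff. A minimal sketch of confirming this by hand while the embed-certs-607268 cluster is still up, assuming the addon labels its pods k8s-app=metrics-server; all other names are taken from the log above:

	# assumes the addon pods carry the k8s-app=metrics-server label
	kubectl --context embed-certs-607268 -n kube-system describe pod -l k8s-app=metrics-server
	# pulling the image directly on the node should fail, because fake.domain is not a real registry
	out/minikube-linux-amd64 -p embed-certs-607268 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4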

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (476.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-12 01:25:32.587616634 +0000 UTC m=+6730.679284244
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-076578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-076578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.658µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-076578 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
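The assertion above waits for a Ready pod carrying the k8s-app=kubernetes-dashboard label and then checks that the dashboard-metrics-scraper deployment uses the overridden registry.k8s.io/echoserver:1.4 image. A rough sketch of the equivalent manual checks, using only names that appear in the log (the jsonpath expression is an illustrative choice, not taken from the test):

	# wait for the dashboard pods the test is polling for
	kubectl --context default-k8s-diff-port-076578 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	# inspect the image the scraper deployment was actually given
	kubectl --context default-k8s-diff-port-076578 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'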
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-076578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-076578 logs -n 25: (1.44218467s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC | 12 Dec 24 01:23 UTC |
	| start   | -p newest-cni-819544 --memory=2200 --alsologtostderr   | newest-cni-819544            | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC | 12 Dec 24 01:24 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC | 12 Dec 24 01:23 UTC |
	| start   | -p auto-018985 --memory=3072                           | auto-018985                  | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC | 12 Dec 24 01:25 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-819544             | newest-cni-819544            | jenkins | v1.34.0 | 12 Dec 24 01:24 UTC | 12 Dec 24 01:24 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-819544                                   | newest-cni-819544            | jenkins | v1.34.0 | 12 Dec 24 01:24 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 01:24 UTC | 12 Dec 24 01:24 UTC |
	| start   | -p kindnet-018985                                      | kindnet-018985               | jenkins | v1.34.0 | 12 Dec 24 01:24 UTC | 12 Dec 24 01:25 UTC |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-018985 pgrep -a                                | auto-018985                  | jenkins | v1.34.0 | 12 Dec 24 01:25 UTC | 12 Dec 24 01:25 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| ssh     | -p auto-018985 sudo cat                                | auto-018985                  | jenkins | v1.34.0 | 12 Dec 24 01:25 UTC | 12 Dec 24 01:25 UTC |
	|         | /etc/nsswitch.conf                                     |                              |         |         |                     |                     |
	| ssh     | -p auto-018985 sudo cat                                | auto-018985                  | jenkins | v1.34.0 | 12 Dec 24 01:25 UTC | 12 Dec 24 01:25 UTC |
	|         | /etc/hosts                                             |                              |         |         |                     |                     |
	| ssh     | -p auto-018985 sudo cat                                | auto-018985                  | jenkins | v1.34.0 | 12 Dec 24 01:25 UTC | 12 Dec 24 01:25 UTC |
	|         | /etc/resolv.conf                                       |                              |         |         |                     |                     |
	| ssh     | -p auto-018985 sudo crictl                             | auto-018985                  | jenkins | v1.34.0 | 12 Dec 24 01:25 UTC | 12 Dec 24 01:25 UTC |
	|         | pods                                                   |                              |         |         |                     |                     |
	| ssh     | -p auto-018985 sudo crictl ps                          | auto-018985                  | jenkins | v1.34.0 | 12 Dec 24 01:25 UTC | 12 Dec 24 01:25 UTC |
	|         | --all                                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 01:24:16
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:24:16.333918  149886 out.go:345] Setting OutFile to fd 1 ...
	I1212 01:24:16.334043  149886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 01:24:16.334053  149886 out.go:358] Setting ErrFile to fd 2...
	I1212 01:24:16.334060  149886 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 01:24:16.334253  149886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 01:24:16.334842  149886 out.go:352] Setting JSON to false
	I1212 01:24:16.335844  149886 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":14798,"bootTime":1733951858,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 01:24:16.335945  149886 start.go:139] virtualization: kvm guest
	I1212 01:24:16.338163  149886 out.go:177] * [kindnet-018985] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 01:24:16.339780  149886 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 01:24:16.339877  149886 notify.go:220] Checking for updates...
	I1212 01:24:16.342082  149886 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:24:16.343391  149886 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:24:16.344655  149886 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:24:16.345815  149886 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 01:24:16.347055  149886 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:24:16.348661  149886 config.go:182] Loaded profile config "auto-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:24:16.348773  149886 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:24:16.348900  149886 config.go:182] Loaded profile config "newest-cni-819544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:24:16.349006  149886 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 01:24:16.388188  149886 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 01:24:16.389669  149886 start.go:297] selected driver: kvm2
	I1212 01:24:16.389683  149886 start.go:901] validating driver "kvm2" against <nil>
	I1212 01:24:16.389699  149886 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:24:16.390559  149886 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:24:16.390653  149886 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 01:24:16.406350  149886 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 01:24:16.406412  149886 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1212 01:24:16.406681  149886 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:24:16.406714  149886 cni.go:84] Creating CNI manager for "kindnet"
	I1212 01:24:16.406721  149886 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 01:24:16.406789  149886 start.go:340] cluster config:
	{Name:kindnet-018985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:24:16.406908  149886 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:24:16.408656  149886 out.go:177] * Starting "kindnet-018985" primary control-plane node in "kindnet-018985" cluster
	I1212 01:24:16.409874  149886 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:24:16.409915  149886 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1212 01:24:16.409925  149886 cache.go:56] Caching tarball of preloaded images
	I1212 01:24:16.409991  149886 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 01:24:16.410003  149886 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 01:24:16.410106  149886 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/config.json ...
	I1212 01:24:16.410126  149886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/config.json: {Name:mk2d70d7a9ca0b291c359402591fffc441cf7969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:16.410278  149886 start.go:360] acquireMachinesLock for kindnet-018985: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:24:16.410312  149886 start.go:364] duration metric: took 17.775µs to acquireMachinesLock for "kindnet-018985"
	I1212 01:24:16.410336  149886 start.go:93] Provisioning new machine with config: &{Name:kindnet-018985 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:kindnet-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:24:16.410401  149886 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 01:24:14.051909  149302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:24:14.051939  149302 main.go:141] libmachine: Detecting the provisioner...
	I1212 01:24:14.051965  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:14.055281  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.055712  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.055740  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.055972  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:14.056187  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.056333  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.056486  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:14.056694  149302 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:14.056907  149302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I1212 01:24:14.056919  149302 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 01:24:14.181824  149302 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 01:24:14.181921  149302 main.go:141] libmachine: found compatible host: buildroot
	I1212 01:24:14.181934  149302 main.go:141] libmachine: Provisioning with buildroot...
	I1212 01:24:14.181944  149302 main.go:141] libmachine: (auto-018985) Calling .GetMachineName
	I1212 01:24:14.182239  149302 buildroot.go:166] provisioning hostname "auto-018985"
	I1212 01:24:14.182266  149302 main.go:141] libmachine: (auto-018985) Calling .GetMachineName
	I1212 01:24:14.182439  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:14.184989  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.185341  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.185364  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.185556  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:14.185743  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.185885  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.186011  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:14.186159  149302 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:14.186323  149302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I1212 01:24:14.186339  149302 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-018985 && echo "auto-018985" | sudo tee /etc/hostname
	I1212 01:24:14.322048  149302 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-018985
	
	I1212 01:24:14.322081  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:14.324961  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.325302  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.325341  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.325478  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:14.325655  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.325831  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.325941  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:14.326109  149302 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:14.326285  149302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I1212 01:24:14.326310  149302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-018985' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-018985/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-018985' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:24:14.453532  149302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:24:14.453563  149302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:24:14.453606  149302 buildroot.go:174] setting up certificates
	I1212 01:24:14.453622  149302 provision.go:84] configureAuth start
	I1212 01:24:14.453636  149302 main.go:141] libmachine: (auto-018985) Calling .GetMachineName
	I1212 01:24:14.453949  149302 main.go:141] libmachine: (auto-018985) Calling .GetIP
	I1212 01:24:14.456669  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.457013  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.457040  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.457204  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:14.459724  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.460091  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.460118  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.460301  149302 provision.go:143] copyHostCerts
	I1212 01:24:14.460379  149302 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:24:14.460405  149302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:24:14.460494  149302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:24:14.460623  149302 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:24:14.460635  149302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:24:14.460675  149302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:24:14.460762  149302 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:24:14.460772  149302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:24:14.460814  149302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:24:14.460892  149302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.auto-018985 san=[127.0.0.1 192.168.61.183 auto-018985 localhost minikube]
	I1212 01:24:14.523687  149302 provision.go:177] copyRemoteCerts
	I1212 01:24:14.523766  149302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:24:14.523801  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:14.526402  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.526713  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.526748  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.526988  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:14.527202  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.527380  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:14.527536  149302 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/id_rsa Username:docker}
	I1212 01:24:14.618575  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:24:14.647539  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 01:24:14.673147  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:24:14.698630  149302 provision.go:87] duration metric: took 244.989484ms to configureAuth
	I1212 01:24:14.698663  149302 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:24:14.698901  149302 config.go:182] Loaded profile config "auto-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:24:14.699004  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:14.702011  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.702458  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.702489  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.702654  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:14.702843  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.702997  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.703150  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:14.703299  149302 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:14.703470  149302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I1212 01:24:14.703485  149302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:24:14.967224  149302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:24:14.967257  149302 main.go:141] libmachine: Checking connection to Docker...
	I1212 01:24:14.967268  149302 main.go:141] libmachine: (auto-018985) Calling .GetURL
	I1212 01:24:14.968563  149302 main.go:141] libmachine: (auto-018985) DBG | Using libvirt version 6000000
	I1212 01:24:14.970769  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.971139  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.971171  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.971378  149302 main.go:141] libmachine: Docker is up and running!
	I1212 01:24:14.971394  149302 main.go:141] libmachine: Reticulating splines...
	I1212 01:24:14.971404  149302 client.go:171] duration metric: took 25.83962612s to LocalClient.Create
	I1212 01:24:14.971429  149302 start.go:167] duration metric: took 25.839693233s to libmachine.API.Create "auto-018985"
	I1212 01:24:14.971442  149302 start.go:293] postStartSetup for "auto-018985" (driver="kvm2")
	I1212 01:24:14.971455  149302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:24:14.971477  149302 main.go:141] libmachine: (auto-018985) Calling .DriverName
	I1212 01:24:14.971771  149302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:24:14.971801  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:14.974012  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.974461  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:14.974490  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:14.974647  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:14.974822  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:14.974957  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:14.975096  149302 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/id_rsa Username:docker}
	I1212 01:24:15.067810  149302 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:24:15.072529  149302 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:24:15.072557  149302 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:24:15.072625  149302 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:24:15.072736  149302 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:24:15.072859  149302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:24:15.085448  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:24:15.113402  149302 start.go:296] duration metric: took 141.943ms for postStartSetup
	I1212 01:24:15.113463  149302 main.go:141] libmachine: (auto-018985) Calling .GetConfigRaw
	I1212 01:24:15.114194  149302 main.go:141] libmachine: (auto-018985) Calling .GetIP
	I1212 01:24:15.117045  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.117415  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:15.117466  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.117735  149302 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/config.json ...
	I1212 01:24:15.117929  149302 start.go:128] duration metric: took 26.005621652s to createHost
	I1212 01:24:15.117952  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:15.120450  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.120765  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:15.120800  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.120960  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:15.121142  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:15.121310  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:15.121505  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:15.121674  149302 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:15.121880  149302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.183 22 <nil> <nil>}
	I1212 01:24:15.121892  149302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:24:15.244753  149302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733966655.217814221
	
	I1212 01:24:15.244778  149302 fix.go:216] guest clock: 1733966655.217814221
	I1212 01:24:15.244788  149302 fix.go:229] Guest: 2024-12-12 01:24:15.217814221 +0000 UTC Remote: 2024-12-12 01:24:15.117940784 +0000 UTC m=+26.123987051 (delta=99.873437ms)
	I1212 01:24:15.244812  149302 fix.go:200] guest clock delta is within tolerance: 99.873437ms
	I1212 01:24:15.244819  149302 start.go:83] releasing machines lock for "auto-018985", held for 26.13263706s
	I1212 01:24:15.244839  149302 main.go:141] libmachine: (auto-018985) Calling .DriverName
	I1212 01:24:15.245127  149302 main.go:141] libmachine: (auto-018985) Calling .GetIP
	I1212 01:24:15.248240  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.248645  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:15.248674  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.248831  149302 main.go:141] libmachine: (auto-018985) Calling .DriverName
	I1212 01:24:15.249318  149302 main.go:141] libmachine: (auto-018985) Calling .DriverName
	I1212 01:24:15.251712  149302 main.go:141] libmachine: (auto-018985) Calling .DriverName
	I1212 01:24:15.251828  149302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:24:15.251873  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:15.251938  149302 ssh_runner.go:195] Run: cat /version.json
	I1212 01:24:15.251965  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:15.254823  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.255068  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.255221  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:15.255299  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.255378  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:15.255405  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:15.255485  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:15.255697  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:15.255700  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:15.255902  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:15.255912  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:15.256078  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:15.256097  149302 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/id_rsa Username:docker}
	I1212 01:24:15.256235  149302 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/id_rsa Username:docker}
	I1212 01:24:15.370662  149302 ssh_runner.go:195] Run: systemctl --version
	I1212 01:24:15.377159  149302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:24:15.553445  149302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:24:15.559967  149302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:24:15.560046  149302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:24:15.580780  149302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:24:15.580805  149302 start.go:495] detecting cgroup driver to use...
	I1212 01:24:15.580887  149302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:24:15.600816  149302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:24:15.617696  149302 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:24:15.617766  149302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:24:15.631468  149302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:24:15.645291  149302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:24:15.779135  149302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:24:15.924172  149302 docker.go:233] disabling docker service ...
	I1212 01:24:15.924239  149302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:24:15.939722  149302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:24:15.953101  149302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:24:16.101195  149302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:24:16.221108  149302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:24:16.235737  149302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:24:16.255262  149302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:24:16.255334  149302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:16.268074  149302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:24:16.268125  149302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:16.282353  149302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:16.296379  149302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:16.308073  149302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:24:16.319420  149302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:16.331062  149302 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:16.351918  149302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
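	Note: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf on the node to pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, set conmon_cgroup to "pod" and open net.ipv4.ip_unprivileged_port_start. A minimal local sketch of the same substitutions in Go is shown below; the starting file contents are placeholders, not the real 02-crio.conf shipped in the ISO:

	```go
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Stand-in for /etc/crio/crio.conf.d/02-crio.conf; on the node the real
		// file is edited in place over SSH with the sed commands in the log.
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// sub applies a whole-line replacement, mirroring sed's 's|^...$|...|'.
		sub := func(s, pattern, repl string) string {
			return regexp.MustCompile("(?m)"+pattern).ReplaceAllString(s, repl)
		}
		conf = sub(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = sub(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`)
		conf = sub(conf, `^.*conmon_cgroup = .*$`, `conmon_cgroup = "pod"`)
		fmt.Print(conf)
	}
	```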
	I1212 01:24:16.363665  149302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:24:16.374683  149302 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:24:16.374743  149302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:24:16.389641  149302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:24:16.400932  149302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:24:16.522011  149302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:24:16.626769  149302 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:24:16.626847  149302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:24:16.631714  149302 start.go:563] Will wait 60s for crictl version
	I1212 01:24:16.631772  149302 ssh_runner.go:195] Run: which crictl
	I1212 01:24:16.635780  149302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:24:16.678300  149302 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:24:16.678391  149302 ssh_runner.go:195] Run: crio --version
	I1212 01:24:16.708118  149302 ssh_runner.go:195] Run: crio --version
	I1212 01:24:16.738698  149302 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:24:16.740156  149302 main.go:141] libmachine: (auto-018985) Calling .GetIP
	I1212 01:24:16.743250  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:16.743732  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:16.743761  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:16.744039  149302 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 01:24:16.748623  149302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:24:16.762703  149302 kubeadm.go:883] updating cluster {Name:auto-018985 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2
ClusterName:auto-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:24:16.762870  149302 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:24:16.762950  149302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:24:16.796281  149302 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:24:16.796366  149302 ssh_runner.go:195] Run: which lz4
	I1212 01:24:16.800983  149302 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:24:16.805398  149302 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:24:16.805429  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:24:18.313988  149302 crio.go:462] duration metric: took 1.513039884s to copy over tarball
	I1212 01:24:18.314054  149302 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:24:16.412893  149886 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1212 01:24:16.413038  149886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:24:16.413100  149886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:24:16.428395  149886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40965
	I1212 01:24:16.428811  149886 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:24:16.429435  149886 main.go:141] libmachine: Using API Version  1
	I1212 01:24:16.429464  149886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:24:16.429908  149886 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:24:16.430138  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetMachineName
	I1212 01:24:16.430311  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:24:16.430508  149886 start.go:159] libmachine.API.Create for "kindnet-018985" (driver="kvm2")
	I1212 01:24:16.430542  149886 client.go:168] LocalClient.Create starting
	I1212 01:24:16.430575  149886 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 01:24:16.430612  149886 main.go:141] libmachine: Decoding PEM data...
	I1212 01:24:16.430627  149886 main.go:141] libmachine: Parsing certificate...
	I1212 01:24:16.430693  149886 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 01:24:16.430712  149886 main.go:141] libmachine: Decoding PEM data...
	I1212 01:24:16.430722  149886 main.go:141] libmachine: Parsing certificate...
	I1212 01:24:16.430737  149886 main.go:141] libmachine: Running pre-create checks...
	I1212 01:24:16.430747  149886 main.go:141] libmachine: (kindnet-018985) Calling .PreCreateCheck
	I1212 01:24:16.431181  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetConfigRaw
	I1212 01:24:16.431558  149886 main.go:141] libmachine: Creating machine...
	I1212 01:24:16.431571  149886 main.go:141] libmachine: (kindnet-018985) Calling .Create
	I1212 01:24:16.431728  149886 main.go:141] libmachine: (kindnet-018985) Creating KVM machine...
	I1212 01:24:16.433047  149886 main.go:141] libmachine: (kindnet-018985) DBG | found existing default KVM network
	I1212 01:24:16.434488  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:16.434293  149910 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:d6:61} reservation:<nil>}
	I1212 01:24:16.435655  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:16.435531  149910 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000282940}
	I1212 01:24:16.435682  149886 main.go:141] libmachine: (kindnet-018985) DBG | created network xml: 
	I1212 01:24:16.435693  149886 main.go:141] libmachine: (kindnet-018985) DBG | <network>
	I1212 01:24:16.435701  149886 main.go:141] libmachine: (kindnet-018985) DBG |   <name>mk-kindnet-018985</name>
	I1212 01:24:16.435716  149886 main.go:141] libmachine: (kindnet-018985) DBG |   <dns enable='no'/>
	I1212 01:24:16.435729  149886 main.go:141] libmachine: (kindnet-018985) DBG |   
	I1212 01:24:16.435766  149886 main.go:141] libmachine: (kindnet-018985) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1212 01:24:16.435807  149886 main.go:141] libmachine: (kindnet-018985) DBG |     <dhcp>
	I1212 01:24:16.435827  149886 main.go:141] libmachine: (kindnet-018985) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1212 01:24:16.435842  149886 main.go:141] libmachine: (kindnet-018985) DBG |     </dhcp>
	I1212 01:24:16.435852  149886 main.go:141] libmachine: (kindnet-018985) DBG |   </ip>
	I1212 01:24:16.435863  149886 main.go:141] libmachine: (kindnet-018985) DBG |   
	I1212 01:24:16.435871  149886 main.go:141] libmachine: (kindnet-018985) DBG | </network>
	I1212 01:24:16.435925  149886 main.go:141] libmachine: (kindnet-018985) DBG | 
	I1212 01:24:16.440627  149886 main.go:141] libmachine: (kindnet-018985) DBG | trying to create private KVM network mk-kindnet-018985 192.168.50.0/24...
	I1212 01:24:16.513704  149886 main.go:141] libmachine: (kindnet-018985) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985 ...
	I1212 01:24:16.513745  149886 main.go:141] libmachine: (kindnet-018985) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 01:24:16.513759  149886 main.go:141] libmachine: (kindnet-018985) DBG | private KVM network mk-kindnet-018985 192.168.50.0/24 created
	I1212 01:24:16.513781  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:16.513614  149910 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:24:16.513862  149886 main.go:141] libmachine: (kindnet-018985) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 01:24:16.788445  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:16.788302  149910 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa...
	I1212 01:24:16.956321  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:16.956141  149910 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/kindnet-018985.rawdisk...
	I1212 01:24:16.956369  149886 main.go:141] libmachine: (kindnet-018985) DBG | Writing magic tar header
	I1212 01:24:16.956387  149886 main.go:141] libmachine: (kindnet-018985) DBG | Writing SSH key tar header
	I1212 01:24:16.956403  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:16.956262  149910 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985 ...
	I1212 01:24:16.956416  149886 main.go:141] libmachine: (kindnet-018985) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985 (perms=drwx------)
	I1212 01:24:16.956437  149886 main.go:141] libmachine: (kindnet-018985) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 01:24:16.956450  149886 main.go:141] libmachine: (kindnet-018985) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 01:24:16.956464  149886 main.go:141] libmachine: (kindnet-018985) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 01:24:16.956479  149886 main.go:141] libmachine: (kindnet-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985
	I1212 01:24:16.956492  149886 main.go:141] libmachine: (kindnet-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 01:24:16.956502  149886 main.go:141] libmachine: (kindnet-018985) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 01:24:16.956517  149886 main.go:141] libmachine: (kindnet-018985) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 01:24:16.956529  149886 main.go:141] libmachine: (kindnet-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:24:16.956543  149886 main.go:141] libmachine: (kindnet-018985) Creating domain...
	I1212 01:24:16.956559  149886 main.go:141] libmachine: (kindnet-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 01:24:16.956567  149886 main.go:141] libmachine: (kindnet-018985) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 01:24:16.956579  149886 main.go:141] libmachine: (kindnet-018985) DBG | Checking permissions on dir: /home/jenkins
	I1212 01:24:16.956587  149886 main.go:141] libmachine: (kindnet-018985) DBG | Checking permissions on dir: /home
	I1212 01:24:16.956600  149886 main.go:141] libmachine: (kindnet-018985) DBG | Skipping /home - not owner
	I1212 01:24:16.957754  149886 main.go:141] libmachine: (kindnet-018985) define libvirt domain using xml: 
	I1212 01:24:16.957795  149886 main.go:141] libmachine: (kindnet-018985) <domain type='kvm'>
	I1212 01:24:16.957807  149886 main.go:141] libmachine: (kindnet-018985)   <name>kindnet-018985</name>
	I1212 01:24:16.957813  149886 main.go:141] libmachine: (kindnet-018985)   <memory unit='MiB'>3072</memory>
	I1212 01:24:16.957821  149886 main.go:141] libmachine: (kindnet-018985)   <vcpu>2</vcpu>
	I1212 01:24:16.957834  149886 main.go:141] libmachine: (kindnet-018985)   <features>
	I1212 01:24:16.957841  149886 main.go:141] libmachine: (kindnet-018985)     <acpi/>
	I1212 01:24:16.957847  149886 main.go:141] libmachine: (kindnet-018985)     <apic/>
	I1212 01:24:16.957867  149886 main.go:141] libmachine: (kindnet-018985)     <pae/>
	I1212 01:24:16.957877  149886 main.go:141] libmachine: (kindnet-018985)     
	I1212 01:24:16.957886  149886 main.go:141] libmachine: (kindnet-018985)   </features>
	I1212 01:24:16.957893  149886 main.go:141] libmachine: (kindnet-018985)   <cpu mode='host-passthrough'>
	I1212 01:24:16.957901  149886 main.go:141] libmachine: (kindnet-018985)   
	I1212 01:24:16.957912  149886 main.go:141] libmachine: (kindnet-018985)   </cpu>
	I1212 01:24:16.957930  149886 main.go:141] libmachine: (kindnet-018985)   <os>
	I1212 01:24:16.957941  149886 main.go:141] libmachine: (kindnet-018985)     <type>hvm</type>
	I1212 01:24:16.957950  149886 main.go:141] libmachine: (kindnet-018985)     <boot dev='cdrom'/>
	I1212 01:24:16.957960  149886 main.go:141] libmachine: (kindnet-018985)     <boot dev='hd'/>
	I1212 01:24:16.957970  149886 main.go:141] libmachine: (kindnet-018985)     <bootmenu enable='no'/>
	I1212 01:24:16.957981  149886 main.go:141] libmachine: (kindnet-018985)   </os>
	I1212 01:24:16.957993  149886 main.go:141] libmachine: (kindnet-018985)   <devices>
	I1212 01:24:16.958004  149886 main.go:141] libmachine: (kindnet-018985)     <disk type='file' device='cdrom'>
	I1212 01:24:16.958020  149886 main.go:141] libmachine: (kindnet-018985)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/boot2docker.iso'/>
	I1212 01:24:16.958031  149886 main.go:141] libmachine: (kindnet-018985)       <target dev='hdc' bus='scsi'/>
	I1212 01:24:16.958042  149886 main.go:141] libmachine: (kindnet-018985)       <readonly/>
	I1212 01:24:16.958048  149886 main.go:141] libmachine: (kindnet-018985)     </disk>
	I1212 01:24:16.958061  149886 main.go:141] libmachine: (kindnet-018985)     <disk type='file' device='disk'>
	I1212 01:24:16.958075  149886 main.go:141] libmachine: (kindnet-018985)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 01:24:16.958091  149886 main.go:141] libmachine: (kindnet-018985)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/kindnet-018985.rawdisk'/>
	I1212 01:24:16.958103  149886 main.go:141] libmachine: (kindnet-018985)       <target dev='hda' bus='virtio'/>
	I1212 01:24:16.958120  149886 main.go:141] libmachine: (kindnet-018985)     </disk>
	I1212 01:24:16.958132  149886 main.go:141] libmachine: (kindnet-018985)     <interface type='network'>
	I1212 01:24:16.958145  149886 main.go:141] libmachine: (kindnet-018985)       <source network='mk-kindnet-018985'/>
	I1212 01:24:16.958159  149886 main.go:141] libmachine: (kindnet-018985)       <model type='virtio'/>
	I1212 01:24:16.958172  149886 main.go:141] libmachine: (kindnet-018985)     </interface>
	I1212 01:24:16.958182  149886 main.go:141] libmachine: (kindnet-018985)     <interface type='network'>
	I1212 01:24:16.958193  149886 main.go:141] libmachine: (kindnet-018985)       <source network='default'/>
	I1212 01:24:16.958203  149886 main.go:141] libmachine: (kindnet-018985)       <model type='virtio'/>
	I1212 01:24:16.958213  149886 main.go:141] libmachine: (kindnet-018985)     </interface>
	I1212 01:24:16.958223  149886 main.go:141] libmachine: (kindnet-018985)     <serial type='pty'>
	I1212 01:24:16.958233  149886 main.go:141] libmachine: (kindnet-018985)       <target port='0'/>
	I1212 01:24:16.958243  149886 main.go:141] libmachine: (kindnet-018985)     </serial>
	I1212 01:24:16.958253  149886 main.go:141] libmachine: (kindnet-018985)     <console type='pty'>
	I1212 01:24:16.958264  149886 main.go:141] libmachine: (kindnet-018985)       <target type='serial' port='0'/>
	I1212 01:24:16.958276  149886 main.go:141] libmachine: (kindnet-018985)     </console>
	I1212 01:24:16.958294  149886 main.go:141] libmachine: (kindnet-018985)     <rng model='virtio'>
	I1212 01:24:16.958309  149886 main.go:141] libmachine: (kindnet-018985)       <backend model='random'>/dev/random</backend>
	I1212 01:24:16.958325  149886 main.go:141] libmachine: (kindnet-018985)     </rng>
	I1212 01:24:16.958337  149886 main.go:141] libmachine: (kindnet-018985)     
	I1212 01:24:16.958345  149886 main.go:141] libmachine: (kindnet-018985)     
	I1212 01:24:16.958357  149886 main.go:141] libmachine: (kindnet-018985)   </devices>
	I1212 01:24:16.958364  149886 main.go:141] libmachine: (kindnet-018985) </domain>
	I1212 01:24:16.958377  149886 main.go:141] libmachine: (kindnet-018985) 
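	Note: the XML above is the libvirt domain definition the kvm2 driver submits for kindnet-018985 (3072 MiB, 2 vCPUs, boot ISO plus raw disk, one NIC on mk-kindnet-018985 and one on the default network). A cut-down sketch of rendering such a definition with text/template follows; it is illustrative only and omits most of what the real template emits (serial console, RNG device, ACPI/APIC features):

	```go
	package main

	import (
		"os"
		"text/template"
	)

	// Stripped-down domain template in the spirit of the XML printed above.
	const domainXML = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='{{.ISO}}'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads'/>
	      <source file='{{.Disk}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	    <interface type='network'><source network='default'/><model type='virtio'/></interface>
	  </devices>
	</domain>
	`

	type domain struct {
		Name, ISO, Disk, Network string
		MemoryMiB, CPUs          int
	}

	func main() {
		d := domain{
			Name:      "kindnet-018985",
			MemoryMiB: 3072,
			CPUs:      2,
			ISO:       "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/boot2docker.iso",
			Disk:      "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/kindnet-018985.rawdisk",
			Network:   "mk-kindnet-018985",
		}
		tmpl := template.Must(template.New("domain").Parse(domainXML))
		if err := tmpl.Execute(os.Stdout, d); err != nil {
			panic(err)
		}
	}
	```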
	I1212 01:24:16.963678  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:fd:50:f4 in network default
	I1212 01:24:16.964391  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:16.964417  149886 main.go:141] libmachine: (kindnet-018985) Ensuring networks are active...
	I1212 01:24:16.965354  149886 main.go:141] libmachine: (kindnet-018985) Ensuring network default is active
	I1212 01:24:16.965715  149886 main.go:141] libmachine: (kindnet-018985) Ensuring network mk-kindnet-018985 is active
	I1212 01:24:16.966509  149886 main.go:141] libmachine: (kindnet-018985) Getting domain xml...
	I1212 01:24:16.967482  149886 main.go:141] libmachine: (kindnet-018985) Creating domain...
	I1212 01:24:18.505374  149886 main.go:141] libmachine: (kindnet-018985) Waiting to get IP...
	I1212 01:24:18.506460  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:18.506889  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:18.506926  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:18.506871  149910 retry.go:31] will retry after 288.994587ms: waiting for machine to come up
	I1212 01:24:18.797776  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:18.798284  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:18.798312  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:18.798241  149910 retry.go:31] will retry after 252.542559ms: waiting for machine to come up
	I1212 01:24:19.052939  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:19.053492  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:19.053529  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:19.053441  149910 retry.go:31] will retry after 330.99788ms: waiting for machine to come up
	I1212 01:24:19.385956  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:19.386438  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:19.386470  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:19.386365  149910 retry.go:31] will retry after 456.433988ms: waiting for machine to come up
	I1212 01:24:19.844029  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:19.844608  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:19.844638  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:19.844554  149910 retry.go:31] will retry after 663.599432ms: waiting for machine to come up
	I1212 01:24:20.509420  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:20.509800  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:20.509826  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:20.509775  149910 retry.go:31] will retry after 913.398744ms: waiting for machine to come up
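	Note: the retry.go:31 lines show the driver polling for the VM's DHCP lease with a growing, jittered delay (roughly 289ms, 456ms, 663ms, 913ms, ...). A generic sketch of that retry-with-backoff pattern, not the actual retry.go implementation:

	```go
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn until it succeeds or attempts run out, sleeping a jittered,
	// growing delay between tries, similar in spirit to the backoff in the log.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		tries := 0
		err := retry(5, 300*time.Millisecond, func() error {
			tries++
			if tries < 3 {
				return errors.New("unable to find current IP address of domain")
			}
			return nil
		})
		fmt.Println("done:", err, "after", tries, "tries")
	}
	```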
	I1212 01:24:20.664428  149302 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.35034073s)
	I1212 01:24:20.664465  149302 crio.go:469] duration metric: took 2.350448097s to extract the tarball
	I1212 01:24:20.664476  149302 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:24:20.704054  149302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:24:20.761965  149302 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:24:20.761996  149302 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:24:20.762006  149302 kubeadm.go:934] updating node { 192.168.61.183 8443 v1.31.2 crio true true} ...
	I1212 01:24:20.762140  149302 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-018985 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:auto-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:24:20.762234  149302 ssh_runner.go:195] Run: crio config
	I1212 01:24:20.826867  149302 cni.go:84] Creating CNI manager for ""
	I1212 01:24:20.826897  149302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:24:20.826912  149302 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:24:20.826941  149302 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.183 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-018985 NodeName:auto-018985 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:24:20.827193  149302 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-018985"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
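	Note: the generated kubeadm config above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new below. A small sketch of reading such a file with gopkg.in/yaml.v3; the local file name kubeadm.yaml and the plain map decoding are assumptions for illustration, not how kubeadm itself parses it:

	```go
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Assumed to hold the multi-document config shown in the log.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
				fmt.Println("  containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
			}
		}
	}
	```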
	
	I1212 01:24:20.827301  149302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:24:20.837906  149302 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:24:20.837998  149302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:24:20.849147  149302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1212 01:24:20.870496  149302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:24:20.891682  149302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I1212 01:24:20.909307  149302 ssh_runner.go:195] Run: grep 192.168.61.183	control-plane.minikube.internal$ /etc/hosts
	I1212 01:24:20.913554  149302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:24:20.927601  149302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:24:21.046956  149302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:24:21.064664  149302 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985 for IP: 192.168.61.183
	I1212 01:24:21.064702  149302 certs.go:194] generating shared ca certs ...
	I1212 01:24:21.064733  149302 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:21.064944  149302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:24:21.065005  149302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:24:21.065019  149302 certs.go:256] generating profile certs ...
	I1212 01:24:21.065100  149302 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/client.key
	I1212 01:24:21.065130  149302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/client.crt with IP's: []
	I1212 01:24:21.229152  149302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/client.crt ...
	I1212 01:24:21.229184  149302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/client.crt: {Name:mke9d0454f80597756501526fd5fc32d3d3b5f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:21.229368  149302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/client.key ...
	I1212 01:24:21.229380  149302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/client.key: {Name:mk0217d174324f16216697ede68df7f61ed053f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:21.229457  149302 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.key.b46fca90
	I1212 01:24:21.229475  149302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.crt.b46fca90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.183]
	I1212 01:24:21.491647  149302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.crt.b46fca90 ...
	I1212 01:24:21.491678  149302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.crt.b46fca90: {Name:mk9eb10f729e9ee15e3552af25f21cde43dec014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:21.491842  149302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.key.b46fca90 ...
	I1212 01:24:21.491855  149302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.key.b46fca90: {Name:mkc2468cd26c9b35a5cab513ff28136e0e74eb6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:21.491939  149302 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.crt.b46fca90 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.crt
	I1212 01:24:21.492042  149302 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.key.b46fca90 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.key
	I1212 01:24:21.492101  149302 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/proxy-client.key
	I1212 01:24:21.492119  149302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/proxy-client.crt with IP's: []
	I1212 01:24:21.593174  149302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/proxy-client.crt ...
	I1212 01:24:21.593229  149302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/proxy-client.crt: {Name:mka44728add33ebd4cfaf99c280f3a52b7eebe0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:21.593401  149302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/proxy-client.key ...
	I1212 01:24:21.593413  149302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/proxy-client.key: {Name:mka94852a7e7406aa4506de5e7f1631e8dd9208b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:21.593613  149302 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:24:21.593656  149302 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:24:21.593667  149302 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:24:21.593690  149302 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:24:21.593713  149302 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:24:21.593735  149302 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:24:21.593774  149302 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:24:21.594462  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:24:21.629778  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:24:21.658927  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:24:21.687333  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:24:21.715311  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1212 01:24:21.749533  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:24:21.851205  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:24:21.882523  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/auto-018985/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:24:21.909804  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:24:21.937393  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:24:21.962039  149302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:24:21.991817  149302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:24:22.009182  149302 ssh_runner.go:195] Run: openssl version
	I1212 01:24:22.015311  149302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:24:22.026508  149302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:24:22.031279  149302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:24:22.031339  149302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:24:22.037472  149302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:24:22.048646  149302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:24:22.060533  149302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:24:22.065465  149302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:24:22.065536  149302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:24:22.071761  149302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:24:22.083125  149302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:24:22.095559  149302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:24:22.100686  149302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:24:22.100750  149302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:24:22.106837  149302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
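	Note: the commands above copy minikubeCA and the per-profile certificates onto the node and create the OpenSSL hash symlinks under /etc/ssl/certs. As a sketch of what that trust setup enables, the snippet below verifies the generated apiserver.crt against the installed CA with crypto/x509; the paths mirror the minikube layout seen in the log and are assumptions on any other host:

	```go
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			panic("no CA certificates parsed")
		}

		leafPEM, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(leafPEM)
		if block == nil {
			panic("no PEM block in apiserver.crt")
		}
		leaf, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}

		// Succeeds when apiserver.crt chains to the minikubeCA installed above.
		if _, err := leaf.Verify(x509.VerifyOptions{Roots: pool}); err != nil {
			fmt.Println("verification failed:", err)
			return
		}
		fmt.Println("apiserver.crt is signed by minikubeCA")
	}
	```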
	I1212 01:24:22.119185  149302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:24:22.123839  149302 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:24:22.123904  149302 kubeadm.go:392] StartCluster: {Name:auto-018985 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clu
sterName:auto-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:24:22.124014  149302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:24:22.124076  149302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:24:22.161656  149302 cri.go:89] found id: ""
	I1212 01:24:22.161744  149302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:24:22.172038  149302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:24:22.182648  149302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:24:22.192514  149302 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:24:22.192534  149302 kubeadm.go:157] found existing configuration files:
	
	I1212 01:24:22.192590  149302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:24:22.202429  149302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:24:22.202497  149302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:24:22.212236  149302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:24:22.222336  149302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:24:22.222397  149302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:24:22.232897  149302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:24:22.242638  149302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:24:22.242711  149302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:24:22.253109  149302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:24:22.262272  149302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:24:22.262356  149302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:24:22.271668  149302 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:24:22.330058  149302 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:24:22.330220  149302 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:24:22.445482  149302 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:24:22.445617  149302 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:24:22.445752  149302 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:24:22.458436  149302 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:24:22.644381  149302 out.go:235]   - Generating certificates and keys ...
	I1212 01:24:22.644509  149302 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:24:22.644611  149302 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:24:22.644763  149302 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:24:22.673367  149302 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:24:22.733496  149302 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:24:22.873268  149302 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1212 01:24:23.197713  149302 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1212 01:24:23.197914  149302 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-018985 localhost] and IPs [192.168.61.183 127.0.0.1 ::1]
	I1212 01:24:23.461182  149302 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1212 01:24:23.461385  149302 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-018985 localhost] and IPs [192.168.61.183 127.0.0.1 ::1]
	I1212 01:24:23.578790  149302 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:24:23.691081  149302 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:24:23.767917  149302 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1212 01:24:23.768080  149302 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:24:23.936525  149302 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:24:24.110343  149302 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:24:24.210895  149302 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:24:24.386469  149302 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:24:24.487757  149302 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:24:24.488368  149302 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:24:24.490829  149302 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:24:21.424395  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:21.424865  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:21.424893  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:21.424808  149910 retry.go:31] will retry after 955.800821ms: waiting for machine to come up
	I1212 01:24:22.382453  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:22.382981  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:22.383013  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:22.382921  149910 retry.go:31] will retry after 1.330870556s: waiting for machine to come up
	I1212 01:24:23.715058  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:23.715588  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:23.715637  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:23.715527  149910 retry.go:31] will retry after 1.783500735s: waiting for machine to come up
	I1212 01:24:25.500757  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:25.501247  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:25.501297  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:25.501193  149910 retry.go:31] will retry after 2.022997686s: waiting for machine to come up
	I1212 01:24:24.492613  149302 out.go:235]   - Booting up control plane ...
	I1212 01:24:24.492731  149302 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:24:24.492853  149302 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:24:24.493002  149302 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:24:24.510561  149302 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:24:24.517483  149302 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:24:24.517550  149302 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:24:24.652417  149302 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:24:24.652583  149302 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:24:25.154689  149302 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.697268ms
	I1212 01:24:25.154793  149302 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:24:30.155458  149302 kubeadm.go:310] [api-check] The API server is healthy after 5.00238729s
	I1212 01:24:30.166452  149302 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:24:30.180695  149302 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:24:30.218728  149302 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:24:30.219027  149302 kubeadm.go:310] [mark-control-plane] Marking the node auto-018985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:24:30.230331  149302 kubeadm.go:310] [bootstrap-token] Using token: wevxt9.gccvrn4cns8jxgl0
	I1212 01:24:27.525377  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:27.525941  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:27.525969  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:27.525894  149910 retry.go:31] will retry after 2.854003392s: waiting for machine to come up
	I1212 01:24:30.381391  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:30.381891  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:30.381913  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:30.381855  149910 retry.go:31] will retry after 3.083146059s: waiting for machine to come up
	I1212 01:24:30.231664  149302 out.go:235]   - Configuring RBAC rules ...
	I1212 01:24:30.231809  149302 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:24:30.236525  149302 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:24:30.243789  149302 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:24:30.246784  149302 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:24:30.250451  149302 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:24:30.257060  149302 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:24:30.575192  149302 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:24:31.004585  149302 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:24:31.562695  149302 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:24:31.562756  149302 kubeadm.go:310] 
	I1212 01:24:31.562884  149302 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:24:31.562908  149302 kubeadm.go:310] 
	I1212 01:24:31.563030  149302 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:24:31.563041  149302 kubeadm.go:310] 
	I1212 01:24:31.563074  149302 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:24:31.563171  149302 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:24:31.563247  149302 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:24:31.563257  149302 kubeadm.go:310] 
	I1212 01:24:31.563340  149302 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:24:31.563352  149302 kubeadm.go:310] 
	I1212 01:24:31.563433  149302 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:24:31.563453  149302 kubeadm.go:310] 
	I1212 01:24:31.563546  149302 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:24:31.563664  149302 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:24:31.563754  149302 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:24:31.563762  149302 kubeadm.go:310] 
	I1212 01:24:31.563873  149302 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:24:31.564015  149302 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:24:31.564033  149302 kubeadm.go:310] 
	I1212 01:24:31.564177  149302 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wevxt9.gccvrn4cns8jxgl0 \
	I1212 01:24:31.564345  149302 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:24:31.564384  149302 kubeadm.go:310] 	--control-plane 
	I1212 01:24:31.564391  149302 kubeadm.go:310] 
	I1212 01:24:31.564521  149302 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:24:31.564567  149302 kubeadm.go:310] 
	I1212 01:24:31.564683  149302 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wevxt9.gccvrn4cns8jxgl0 \
	I1212 01:24:31.564867  149302 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:24:31.565029  149302 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:24:31.565056  149302 cni.go:84] Creating CNI manager for ""
	I1212 01:24:31.565073  149302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:24:31.566973  149302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:24:31.568334  149302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:24:31.580539  149302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
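The CNI step above only reports that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; the payload itself is not captured in the log. As an illustrative sketch only (assumed shape, not the exact file minikube writes; the 10.244.0.0/16 subnet is a placeholder), a bridge-plugin conflist of this kind generally looks like:

	# illustrative only: approximate shape of a bridge CNI conflist
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF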
	I1212 01:24:31.601708  149302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:24:31.601779  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:31.601797  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-018985 minikube.k8s.io/updated_at=2024_12_12T01_24_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=auto-018985 minikube.k8s.io/primary=true
	I1212 01:24:31.779680  149302 ops.go:34] apiserver oom_adj: -16
	I1212 01:24:31.779686  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:32.280255  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:32.780373  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:33.280526  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:33.779802  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:33.468167  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:33.468681  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:33.468707  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:33.468643  149910 retry.go:31] will retry after 3.964460332s: waiting for machine to come up
	I1212 01:24:34.280730  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:34.780689  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:35.279745  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:35.779738  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:36.279740  149302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:24:36.390369  149302 kubeadm.go:1113] duration metric: took 4.788647805s to wait for elevateKubeSystemPrivileges
	I1212 01:24:36.390418  149302 kubeadm.go:394] duration metric: took 14.266520104s to StartCluster
	I1212 01:24:36.390444  149302 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:36.390543  149302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:24:36.392103  149302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:36.392410  149302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 01:24:36.392416  149302 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.183 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:24:36.392511  149302 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:24:36.392611  149302 addons.go:69] Setting storage-provisioner=true in profile "auto-018985"
	I1212 01:24:36.392628  149302 config.go:182] Loaded profile config "auto-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:24:36.392633  149302 addons.go:69] Setting default-storageclass=true in profile "auto-018985"
	I1212 01:24:36.392682  149302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-018985"
	I1212 01:24:36.392638  149302 addons.go:234] Setting addon storage-provisioner=true in "auto-018985"
	I1212 01:24:36.392831  149302 host.go:66] Checking if "auto-018985" exists ...
	I1212 01:24:36.393231  149302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:24:36.393285  149302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:24:36.393335  149302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:24:36.393415  149302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:24:36.394167  149302 out.go:177] * Verifying Kubernetes components...
	I1212 01:24:36.396149  149302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:24:36.408981  149302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38887
	I1212 01:24:36.409188  149302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41431
	I1212 01:24:36.409548  149302 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:24:36.409653  149302 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:24:36.410211  149302 main.go:141] libmachine: Using API Version  1
	I1212 01:24:36.410241  149302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:24:36.410369  149302 main.go:141] libmachine: Using API Version  1
	I1212 01:24:36.410393  149302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:24:36.410638  149302 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:24:36.410738  149302 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:24:36.410920  149302 main.go:141] libmachine: (auto-018985) Calling .GetState
	I1212 01:24:36.411248  149302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:24:36.411299  149302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:24:36.414431  149302 addons.go:234] Setting addon default-storageclass=true in "auto-018985"
	I1212 01:24:36.414484  149302 host.go:66] Checking if "auto-018985" exists ...
	I1212 01:24:36.414860  149302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:24:36.414925  149302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:24:36.427266  149302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I1212 01:24:36.427942  149302 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:24:36.428631  149302 main.go:141] libmachine: Using API Version  1
	I1212 01:24:36.428665  149302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:24:36.429088  149302 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:24:36.429318  149302 main.go:141] libmachine: (auto-018985) Calling .GetState
	I1212 01:24:36.429915  149302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I1212 01:24:36.430380  149302 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:24:36.430949  149302 main.go:141] libmachine: Using API Version  1
	I1212 01:24:36.430971  149302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:24:36.431229  149302 main.go:141] libmachine: (auto-018985) Calling .DriverName
	I1212 01:24:36.431353  149302 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:24:36.431985  149302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:24:36.432040  149302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:24:36.432886  149302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:24:36.434206  149302 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:24:36.434223  149302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:24:36.434244  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:36.440989  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:36.441393  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:36.441421  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:36.441705  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:36.441887  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:36.442066  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:36.442231  149302 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/id_rsa Username:docker}
	I1212 01:24:36.449381  149302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38771
	I1212 01:24:36.449841  149302 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:24:36.450441  149302 main.go:141] libmachine: Using API Version  1
	I1212 01:24:36.450466  149302 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:24:36.450787  149302 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:24:36.450976  149302 main.go:141] libmachine: (auto-018985) Calling .GetState
	I1212 01:24:36.452645  149302 main.go:141] libmachine: (auto-018985) Calling .DriverName
	I1212 01:24:36.452851  149302 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:24:36.452869  149302 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:24:36.452891  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHHostname
	I1212 01:24:36.455902  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:36.456325  149302 main.go:141] libmachine: (auto-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:10:2b", ip: ""} in network mk-auto-018985: {Iface:virbr3 ExpiryTime:2024-12-12 02:24:05 +0000 UTC Type:0 Mac:52:54:00:64:10:2b Iaid: IPaddr:192.168.61.183 Prefix:24 Hostname:auto-018985 Clientid:01:52:54:00:64:10:2b}
	I1212 01:24:36.456353  149302 main.go:141] libmachine: (auto-018985) DBG | domain auto-018985 has defined IP address 192.168.61.183 and MAC address 52:54:00:64:10:2b in network mk-auto-018985
	I1212 01:24:36.456483  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHPort
	I1212 01:24:36.456630  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHKeyPath
	I1212 01:24:36.456793  149302 main.go:141] libmachine: (auto-018985) Calling .GetSSHUsername
	I1212 01:24:36.456889  149302 sshutil.go:53] new ssh client: &{IP:192.168.61.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/auto-018985/id_rsa Username:docker}
	I1212 01:24:36.674111  149302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:24:36.674155  149302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 01:24:36.699241  149302 node_ready.go:35] waiting up to 15m0s for node "auto-018985" to be "Ready" ...
	I1212 01:24:36.707924  149302 node_ready.go:49] node "auto-018985" has status "Ready":"True"
	I1212 01:24:36.707952  149302 node_ready.go:38] duration metric: took 8.652283ms for node "auto-018985" to be "Ready" ...
	I1212 01:24:36.707961  149302 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:24:36.718348  149302 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace to be "Ready" ...
	I1212 01:24:36.786703  149302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:24:36.866837  149302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:24:36.986075  149302 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
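The sed pipeline logged at 01:24:36.674155 above is hard to read in its escaped form; unescaped from that same command, the stanza it injects into the CoreDNS Corefile before replacing the ConfigMap is:

	hosts {
	   192.168.61.1 host.minikube.internal
	   fallthrough
	}

This hosts block is what lets pods resolve host.minikube.internal to the host address 192.168.61.1; the same command also inserts a log directive ahead of the errors line.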
	I1212 01:24:36.986169  149302 main.go:141] libmachine: Making call to close driver server
	I1212 01:24:36.986193  149302 main.go:141] libmachine: (auto-018985) Calling .Close
	I1212 01:24:36.986498  149302 main.go:141] libmachine: (auto-018985) DBG | Closing plugin on server side
	I1212 01:24:36.986500  149302 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:24:36.986525  149302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:24:36.986534  149302 main.go:141] libmachine: Making call to close driver server
	I1212 01:24:36.986550  149302 main.go:141] libmachine: (auto-018985) Calling .Close
	I1212 01:24:36.986783  149302 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:24:36.986802  149302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:24:36.999770  149302 main.go:141] libmachine: Making call to close driver server
	I1212 01:24:36.999800  149302 main.go:141] libmachine: (auto-018985) Calling .Close
	I1212 01:24:37.000111  149302 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:24:37.000168  149302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:24:37.000177  149302 main.go:141] libmachine: (auto-018985) DBG | Closing plugin on server side
	I1212 01:24:37.349256  149302 main.go:141] libmachine: Making call to close driver server
	I1212 01:24:37.349294  149302 main.go:141] libmachine: (auto-018985) Calling .Close
	I1212 01:24:37.350289  149302 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:24:37.350305  149302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:24:37.350312  149302 main.go:141] libmachine: Making call to close driver server
	I1212 01:24:37.350314  149302 main.go:141] libmachine: (auto-018985) DBG | Closing plugin on server side
	I1212 01:24:37.350318  149302 main.go:141] libmachine: (auto-018985) Calling .Close
	I1212 01:24:37.350597  149302 main.go:141] libmachine: (auto-018985) DBG | Closing plugin on server side
	I1212 01:24:37.350650  149302 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:24:37.350668  149302 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:24:37.352603  149302 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1212 01:24:37.354076  149302 addons.go:510] duration metric: took 961.565092ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1212 01:24:37.492979  149302 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-018985" context rescaled to 1 replicas
	I1212 01:24:38.725584  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:37.434692  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:37.435107  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find current IP address of domain kindnet-018985 in network mk-kindnet-018985
	I1212 01:24:37.435128  149886 main.go:141] libmachine: (kindnet-018985) DBG | I1212 01:24:37.435062  149910 retry.go:31] will retry after 3.843175078s: waiting for machine to come up
	I1212 01:24:41.279385  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:41.279910  149886 main.go:141] libmachine: (kindnet-018985) Found IP for machine: 192.168.50.69
	I1212 01:24:41.279958  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has current primary IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:41.279967  149886 main.go:141] libmachine: (kindnet-018985) Reserving static IP address...
	I1212 01:24:41.280240  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find host DHCP lease matching {name: "kindnet-018985", mac: "52:54:00:45:05:69", ip: "192.168.50.69"} in network mk-kindnet-018985
	I1212 01:24:40.727883  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:43.225552  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:41.356935  149886 main.go:141] libmachine: (kindnet-018985) DBG | Getting to WaitForSSH function...
	I1212 01:24:41.356971  149886 main.go:141] libmachine: (kindnet-018985) Reserved static IP address: 192.168.50.69
	I1212 01:24:41.356985  149886 main.go:141] libmachine: (kindnet-018985) Waiting for SSH to be available...
	I1212 01:24:41.359819  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:41.360193  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985
	I1212 01:24:41.360219  149886 main.go:141] libmachine: (kindnet-018985) DBG | unable to find defined IP address of network mk-kindnet-018985 interface with MAC address 52:54:00:45:05:69
	I1212 01:24:41.360397  149886 main.go:141] libmachine: (kindnet-018985) DBG | Using SSH client type: external
	I1212 01:24:41.360417  149886 main.go:141] libmachine: (kindnet-018985) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa (-rw-------)
	I1212 01:24:41.360463  149886 main.go:141] libmachine: (kindnet-018985) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:24:41.360481  149886 main.go:141] libmachine: (kindnet-018985) DBG | About to run SSH command:
	I1212 01:24:41.360500  149886 main.go:141] libmachine: (kindnet-018985) DBG | exit 0
	I1212 01:24:41.364323  149886 main.go:141] libmachine: (kindnet-018985) DBG | SSH cmd err, output: exit status 255: 
	I1212 01:24:41.364347  149886 main.go:141] libmachine: (kindnet-018985) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1212 01:24:41.364358  149886 main.go:141] libmachine: (kindnet-018985) DBG | command : exit 0
	I1212 01:24:41.364368  149886 main.go:141] libmachine: (kindnet-018985) DBG | err     : exit status 255
	I1212 01:24:41.364379  149886 main.go:141] libmachine: (kindnet-018985) DBG | output  : 
	I1212 01:24:44.364906  149886 main.go:141] libmachine: (kindnet-018985) DBG | Getting to WaitForSSH function...
	I1212 01:24:44.367171  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.367657  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:44.367698  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.367785  149886 main.go:141] libmachine: (kindnet-018985) DBG | Using SSH client type: external
	I1212 01:24:44.367816  149886 main.go:141] libmachine: (kindnet-018985) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa (-rw-------)
	I1212 01:24:44.367848  149886 main.go:141] libmachine: (kindnet-018985) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.69 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:24:44.367861  149886 main.go:141] libmachine: (kindnet-018985) DBG | About to run SSH command:
	I1212 01:24:44.367874  149886 main.go:141] libmachine: (kindnet-018985) DBG | exit 0
	I1212 01:24:44.495966  149886 main.go:141] libmachine: (kindnet-018985) DBG | SSH cmd err, output: <nil>: 
	I1212 01:24:44.496221  149886 main.go:141] libmachine: (kindnet-018985) KVM machine creation complete!
	I1212 01:24:44.496607  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetConfigRaw
	I1212 01:24:44.497242  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:24:44.497445  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:24:44.497611  149886 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 01:24:44.497627  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetState
	I1212 01:24:44.498784  149886 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 01:24:44.498797  149886 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 01:24:44.498802  149886 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 01:24:44.498807  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:44.500798  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.501165  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:44.501208  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.501331  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:44.501475  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:44.501628  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:44.501741  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:44.501898  149886 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:44.502143  149886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I1212 01:24:44.502158  149886 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 01:24:44.607128  149886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:24:44.607158  149886 main.go:141] libmachine: Detecting the provisioner...
	I1212 01:24:44.607169  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:44.609911  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.610345  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:44.610378  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.610465  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:44.610663  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:44.610823  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:44.610974  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:44.611162  149886 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:44.611351  149886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I1212 01:24:44.611363  149886 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 01:24:44.721081  149886 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 01:24:44.721184  149886 main.go:141] libmachine: found compatible host: buildroot
	I1212 01:24:44.721200  149886 main.go:141] libmachine: Provisioning with buildroot...
	I1212 01:24:44.721216  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetMachineName
	I1212 01:24:44.721486  149886 buildroot.go:166] provisioning hostname "kindnet-018985"
	I1212 01:24:44.721514  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetMachineName
	I1212 01:24:44.721708  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:44.724509  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.724909  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:44.724947  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.725221  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:44.725411  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:44.725572  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:44.725710  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:44.725862  149886 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:44.726014  149886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I1212 01:24:44.726031  149886 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-018985 && echo "kindnet-018985" | sudo tee /etc/hostname
	I1212 01:24:44.847571  149886 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-018985
	
	I1212 01:24:44.847615  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:44.850370  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.850736  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:44.850781  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.850944  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:44.851135  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:44.851268  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:44.851422  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:44.851583  149886 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:44.851824  149886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I1212 01:24:44.851843  149886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-018985' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-018985/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-018985' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:24:44.970991  149886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:24:44.971032  149886 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:24:44.971092  149886 buildroot.go:174] setting up certificates
	I1212 01:24:44.971109  149886 provision.go:84] configureAuth start
	I1212 01:24:44.971136  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetMachineName
	I1212 01:24:44.971450  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetIP
	I1212 01:24:44.974293  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.974698  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:44.974731  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.974848  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:44.977164  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.977543  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:44.977570  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:44.977742  149886 provision.go:143] copyHostCerts
	I1212 01:24:44.977815  149886 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:24:44.977840  149886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:24:44.977919  149886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:24:44.978051  149886 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:24:44.978065  149886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:24:44.978104  149886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:24:44.978207  149886 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:24:44.978220  149886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:24:44.978259  149886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:24:44.978342  149886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.kindnet-018985 san=[127.0.0.1 192.168.50.69 kindnet-018985 localhost minikube]
	I1212 01:24:45.047230  149886 provision.go:177] copyRemoteCerts
	I1212 01:24:45.047299  149886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:24:45.047326  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:45.050144  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.050484  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.050505  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.050683  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:45.050878  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:45.051027  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:45.051141  149886 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa Username:docker}
	I1212 01:24:45.138643  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 01:24:45.164436  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:24:45.189731  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1212 01:24:45.214155  149886 provision.go:87] duration metric: took 243.01949ms to configureAuth
	I1212 01:24:45.214189  149886 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:24:45.214419  149886 config.go:182] Loaded profile config "kindnet-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:24:45.214492  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:45.217344  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.217651  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.217675  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.217832  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:45.218063  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:45.218228  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:45.218419  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:45.218645  149886 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:45.218841  149886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I1212 01:24:45.218865  149886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:24:45.457858  149886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:24:45.457890  149886 main.go:141] libmachine: Checking connection to Docker...
	I1212 01:24:45.457902  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetURL
	I1212 01:24:45.459281  149886 main.go:141] libmachine: (kindnet-018985) DBG | Using libvirt version 6000000
	I1212 01:24:45.461647  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.462012  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.462043  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.462187  149886 main.go:141] libmachine: Docker is up and running!
	I1212 01:24:45.462209  149886 main.go:141] libmachine: Reticulating splines...
	I1212 01:24:45.462218  149886 client.go:171] duration metric: took 29.031665049s to LocalClient.Create
	I1212 01:24:45.462247  149886 start.go:167] duration metric: took 29.031739408s to libmachine.API.Create "kindnet-018985"
	I1212 01:24:45.462260  149886 start.go:293] postStartSetup for "kindnet-018985" (driver="kvm2")
	I1212 01:24:45.462277  149886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:24:45.462302  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:24:45.462553  149886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:24:45.462581  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:45.464743  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.465054  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.465082  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.465260  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:45.465463  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:45.465635  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:45.465757  149886 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa Username:docker}
	I1212 01:24:45.552082  149886 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:24:45.556815  149886 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:24:45.556843  149886 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:24:45.556915  149886 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:24:45.556984  149886 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:24:45.557081  149886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:24:45.568874  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:24:45.596403  149886 start.go:296] duration metric: took 134.122329ms for postStartSetup
	I1212 01:24:45.596498  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetConfigRaw
	I1212 01:24:45.597172  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetIP
	I1212 01:24:45.599944  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.600292  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.600320  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.600599  149886 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/config.json ...
	I1212 01:24:45.600790  149886 start.go:128] duration metric: took 29.190371353s to createHost
	I1212 01:24:45.600813  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:45.603224  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.603579  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.603626  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.603789  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:45.603962  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:45.604116  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:45.604255  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:45.604414  149886 main.go:141] libmachine: Using SSH client type: native
	I1212 01:24:45.604596  149886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.69 22 <nil> <nil>}
	I1212 01:24:45.604607  149886 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:24:45.712768  149886 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733966685.691420491
	
	I1212 01:24:45.712795  149886 fix.go:216] guest clock: 1733966685.691420491
	I1212 01:24:45.712811  149886 fix.go:229] Guest: 2024-12-12 01:24:45.691420491 +0000 UTC Remote: 2024-12-12 01:24:45.600802133 +0000 UTC m=+29.308803126 (delta=90.618358ms)
	I1212 01:24:45.712832  149886 fix.go:200] guest clock delta is within tolerance: 90.618358ms
	I1212 01:24:45.712837  149886 start.go:83] releasing machines lock for "kindnet-018985", held for 29.302512869s
	I1212 01:24:45.712856  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:24:45.713110  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetIP
	I1212 01:24:45.716101  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.716473  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.716502  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.716746  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:24:45.717231  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:24:45.717436  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:24:45.717560  149886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:24:45.717608  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:45.717624  149886 ssh_runner.go:195] Run: cat /version.json
	I1212 01:24:45.717651  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:24:45.720317  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.720510  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.720691  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.720720  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.720855  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:45.720877  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:45.720892  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:45.721071  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:24:45.721128  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:45.721245  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:24:45.721298  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:45.721379  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:24:45.721444  149886 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa Username:docker}
	I1212 01:24:45.721541  149886 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa Username:docker}
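The two sshutil lines above create SSH clients for the kindnet-018985 VM using the per-machine id_rsa key, so that the two queued commands (curl to registry.k8s.io and cat /version.json) can run in parallel. Below is a minimal sketch of opening such a connection with golang.org/x/crypto/ssh; it is not minikube's sshutil code, and the InsecureIgnoreHostKey callback is an assumption that is only reasonable for throwaway test VMs.

// Sketch only, not minikube's sshutil: connect to the VM from the log and run
// one of the commands shown above ("cat /version.json").
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for disposable test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.50.69:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("cat /version.json")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}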
	I1212 01:24:45.835675  149886 ssh_runner.go:195] Run: systemctl --version
	I1212 01:24:45.842180  149886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:24:46.000317  149886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:24:46.006702  149886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:24:46.006765  149886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:24:46.023800  149886 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:24:46.023840  149886 start.go:495] detecting cgroup driver to use...
	I1212 01:24:46.023902  149886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:24:46.045544  149886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:24:46.062821  149886 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:24:46.062895  149886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:24:46.079869  149886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:24:46.096849  149886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:24:46.221998  149886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:24:46.360716  149886 docker.go:233] disabling docker service ...
	I1212 01:24:46.360781  149886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:24:46.375999  149886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:24:46.390130  149886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:24:46.539910  149886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:24:46.675719  149886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:24:46.690684  149886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:24:46.710147  149886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:24:46.710216  149886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:46.720920  149886 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:24:46.720992  149886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:46.732785  149886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:46.743182  149886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:46.754184  149886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:24:46.765913  149886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:46.776817  149886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:24:46.795115  149886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
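The ssh_runner calls above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed: they pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, and inject a default_sysctls entry for net.ipv4.ip_unprivileged_port_start. The Go sketch below shows the same whole-line regex-replace technique; the sample config contents (pause:3.9, systemd) are invented stand-ins for whatever the file held before the edit, and this is not the code minikube runs.

// Sketch of the regex-replace technique the sed commands above apply to the cri-o drop-in config.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}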
	I1212 01:24:46.806383  149886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:24:46.816333  149886 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:24:46.816402  149886 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:24:46.830581  149886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:24:46.840704  149886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:24:46.964749  149886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:24:47.060596  149886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:24:47.060661  149886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:24:47.065898  149886 start.go:563] Will wait 60s for crictl version
	I1212 01:24:47.065964  149886 ssh_runner.go:195] Run: which crictl
	I1212 01:24:47.069884  149886 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:24:47.111936  149886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:24:47.112023  149886 ssh_runner.go:195] Run: crio --version
	I1212 01:24:47.142928  149886 ssh_runner.go:195] Run: crio --version
	I1212 01:24:47.175702  149886 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:24:45.726225  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:47.726683  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:47.176886  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetIP
	I1212 01:24:47.179638  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:47.180029  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:24:47.180062  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:24:47.180297  149886 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 01:24:47.184647  149886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:24:47.197221  149886 kubeadm.go:883] updating cluster {Name:kindnet-018985 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:24:47.197364  149886 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:24:47.197440  149886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:24:47.229601  149886 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:24:47.229678  149886 ssh_runner.go:195] Run: which lz4
	I1212 01:24:47.233921  149886 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:24:47.238544  149886 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:24:47.238577  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:24:48.746888  149886 crio.go:462] duration metric: took 1.51300116s to copy over tarball
	I1212 01:24:48.746983  149886 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:24:51.045814  149886 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.298785065s)
	I1212 01:24:51.045862  149886 crio.go:469] duration metric: took 2.298937713s to extract the tarball
	I1212 01:24:51.045873  149886 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:24:51.084902  149886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:24:51.137251  149886 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:24:51.137275  149886 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:24:51.137289  149886 kubeadm.go:934] updating node { 192.168.50.69 8443 v1.31.2 crio true true} ...
	I1212 01:24:51.137416  149886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-018985 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kindnet-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1212 01:24:51.137538  149886 ssh_runner.go:195] Run: crio config
	I1212 01:24:51.190604  149886 cni.go:84] Creating CNI manager for "kindnet"
	I1212 01:24:51.190630  149886 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:24:51.190651  149886 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.69 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-018985 NodeName:kindnet-018985 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:24:51.190787  149886 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-018985"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.69"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.69"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
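The multi-document kubeadm config dumped above is later written out as /var/tmp/minikube/kubeadm.yaml (via the kubeadm.yaml.new scp below). As a rough illustration of how such a file can be inspected, the sketch below splits it on "---" and reads cgroupDriver from the KubeletConfiguration document, which should agree with the cgroup_manager = "cgroupfs" setting applied to cri-o earlier. This is not minikube code; the use of gopkg.in/yaml.v3 is an assumption.

// Sketch only: pull cgroupDriver out of the generated kubeadm config.
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path used later in the log
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			continue
		}
		if m["kind"] == "KubeletConfiguration" {
			// Expected to print "cgroupfs", matching cri-o's cgroup_manager above.
			fmt.Println("kubelet cgroupDriver:", m["cgroupDriver"])
		}
	}
}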
	
	I1212 01:24:51.190852  149886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:24:51.202626  149886 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:24:51.202701  149886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:24:51.212736  149886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1212 01:24:51.231194  149886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:24:51.250242  149886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I1212 01:24:51.268206  149886 ssh_runner.go:195] Run: grep 192.168.50.69	control-plane.minikube.internal$ /etc/hosts
	I1212 01:24:51.272643  149886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.69	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:24:51.286768  149886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:24:51.413625  149886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:24:51.432711  149886 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985 for IP: 192.168.50.69
	I1212 01:24:51.432744  149886 certs.go:194] generating shared ca certs ...
	I1212 01:24:51.432792  149886 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:51.432984  149886 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:24:51.433034  149886 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:24:51.433048  149886 certs.go:256] generating profile certs ...
	I1212 01:24:51.433116  149886 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/client.key
	I1212 01:24:51.433137  149886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/client.crt with IP's: []
	I1212 01:24:51.664762  149886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/client.crt ...
	I1212 01:24:51.664801  149886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/client.crt: {Name:mk63e75c3b442b89681829b0849c0beea679d45a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:51.665022  149886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/client.key ...
	I1212 01:24:51.665041  149886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/client.key: {Name:mk1b752b392bc369fb37f56056f7e3feffdb3841 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:51.665172  149886 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.key.20fe81ea
	I1212 01:24:51.665194  149886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.crt.20fe81ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.69]
	I1212 01:24:51.821932  149886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.crt.20fe81ea ...
	I1212 01:24:51.821964  149886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.crt.20fe81ea: {Name:mk1a496c896a56b93d0a587c34bd64746c155586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:51.822153  149886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.key.20fe81ea ...
	I1212 01:24:51.822171  149886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.key.20fe81ea: {Name:mkb4febde55b42b8f02f303ab724730b86fc51f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:51.822282  149886 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.crt.20fe81ea -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.crt
	I1212 01:24:51.822415  149886 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.key.20fe81ea -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.key
	I1212 01:24:51.822500  149886 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/proxy-client.key
	I1212 01:24:51.822521  149886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/proxy-client.crt with IP's: []
	I1212 01:24:52.003664  149886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/proxy-client.crt ...
	I1212 01:24:52.003695  149886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/proxy-client.crt: {Name:mk667953481a6801249d7aea6d6743fbdd78f454 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:24:52.003888  149886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/proxy-client.key ...
	I1212 01:24:52.003907  149886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/proxy-client.key: {Name:mk5a2e5920705683dc8d5d540293a13f0fcd876a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
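The certs.go/crypto.go lines above generate the profile certificates (client, apiserver, aggregator proxy-client), with the apiserver cert carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.50.69. The sketch below shows the general crypto/x509 technique for producing such a certificate; it is self-signed for brevity, whereas minikube signs against its minikubeCA, so treat it as an illustration rather than the actual implementation. The common name and lifetime are assumptions.

// Sketch: generate a key pair and a serving certificate with the IP SANs listed above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // lifetime is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.69"),
		},
	}
	// Self-signed (template used as its own parent); minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}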
	I1212 01:24:52.004154  149886 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:24:52.004212  149886 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:24:52.004228  149886 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:24:52.004265  149886 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:24:52.004332  149886 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:24:52.004373  149886 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:24:52.004433  149886 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:24:52.005145  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:24:52.031674  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:24:52.058322  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:24:52.083713  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:24:52.112527  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 01:24:52.136913  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:24:52.167065  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:24:52.191689  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/kindnet-018985/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:24:52.219832  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:24:52.245205  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:24:52.272021  149886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:24:52.298368  149886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:24:52.316749  149886 ssh_runner.go:195] Run: openssl version
	I1212 01:24:52.322743  149886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:24:52.334034  149886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:24:52.338863  149886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:24:52.338924  149886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:24:52.345291  149886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:24:52.357042  149886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:24:52.369877  149886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:24:52.374674  149886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:24:52.374743  149886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:24:52.381392  149886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:24:52.393191  149886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:24:52.405088  149886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:24:52.410364  149886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:24:52.410430  149886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:24:52.416528  149886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:24:52.428727  149886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:24:52.433303  149886 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:24:52.433372  149886 kubeadm.go:392] StartCluster: {Name:kindnet-018985 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-018985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:24:52.433492  149886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:24:52.433554  149886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:24:52.472479  149886 cri.go:89] found id: ""
	I1212 01:24:52.472574  149886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:24:52.483205  149886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:24:52.493708  149886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:24:52.503937  149886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:24:52.503962  149886 kubeadm.go:157] found existing configuration files:
	
	I1212 01:24:52.504018  149886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:24:52.513562  149886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:24:52.513638  149886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:24:52.523497  149886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:24:52.532965  149886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:24:52.533023  149886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:24:52.543119  149886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:24:52.553623  149886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:24:52.553686  149886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:24:52.563813  149886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:24:52.573950  149886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:24:52.574030  149886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:24:52.584150  149886 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:24:52.645899  149886 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:24:52.645973  149886 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:24:52.756351  149886 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:24:52.756522  149886 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:24:52.756676  149886 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:24:52.767149  149886 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:24:50.330122  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:52.724505  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:52.770133  149886 out.go:235]   - Generating certificates and keys ...
	I1212 01:24:52.770246  149886 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:24:52.770344  149886 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:24:52.852054  149886 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:24:52.989849  149886 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:24:53.251068  149886 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:24:53.455271  149886 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1212 01:24:53.652353  149886 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1212 01:24:53.652608  149886 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-018985 localhost] and IPs [192.168.50.69 127.0.0.1 ::1]
	I1212 01:24:53.979634  149886 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1212 01:24:53.979825  149886 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-018985 localhost] and IPs [192.168.50.69 127.0.0.1 ::1]
	I1212 01:24:54.119262  149886 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:24:54.338126  149886 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:24:54.415735  149886 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1212 01:24:54.416063  149886 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:24:54.681716  149886 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:24:54.795902  149886 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:24:54.959458  149886 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:24:55.318591  149886 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:24:55.547216  149886 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:24:55.547911  149886 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:24:55.550345  149886 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:24:55.552485  149886 out.go:235]   - Booting up control plane ...
	I1212 01:24:55.552625  149886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:24:55.552749  149886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:24:55.552847  149886 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:24:55.567992  149886 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:24:55.574039  149886 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:24:55.574152  149886 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:24:55.702752  149886 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:24:55.702937  149886 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:24:54.725391  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:57.225357  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:24:56.703737  149886 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001518464s
	I1212 01:24:56.703873  149886 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:25:01.702954  149886 kubeadm.go:310] [api-check] The API server is healthy after 5.001444041s
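The [kubelet-check] and [api-check] lines above describe kubeadm polling healthz endpoints (the kubelet on 127.0.0.1:10248, then the API server) for up to 4m0s each. The generic Go sketch below shows that pattern: retry an HTTP GET until it returns 200 or the deadline passes. The 500ms interval and the function name waitHealthy are assumptions; the port and the 4-minute cap are the values quoted in the log.

// Generic sketch of a healthz poll like the kubelet-check / api-check waits above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}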
	I1212 01:25:01.728646  149886 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:25:01.748048  149886 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:25:01.776816  149886 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:25:01.777098  149886 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-018985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:25:01.792692  149886 kubeadm.go:310] [bootstrap-token] Using token: 32i106.qdq735g1wrnegf4g
	I1212 01:25:01.794329  149886 out.go:235]   - Configuring RBAC rules ...
	I1212 01:25:01.794482  149886 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:25:01.803173  149886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:25:01.811787  149886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:25:01.815320  149886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:25:01.818458  149886 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:25:01.829086  149886 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:25:02.117108  149886 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:25:02.558338  149886 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:25:03.115910  149886 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:25:03.117070  149886 kubeadm.go:310] 
	I1212 01:25:03.117171  149886 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:25:03.117186  149886 kubeadm.go:310] 
	I1212 01:25:03.117308  149886 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:25:03.117325  149886 kubeadm.go:310] 
	I1212 01:25:03.117360  149886 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:25:03.117469  149886 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:25:03.117547  149886 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:25:03.117558  149886 kubeadm.go:310] 
	I1212 01:25:03.117627  149886 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:25:03.117638  149886 kubeadm.go:310] 
	I1212 01:25:03.117704  149886 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:25:03.117713  149886 kubeadm.go:310] 
	I1212 01:25:03.117792  149886 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:25:03.117891  149886 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:25:03.117987  149886 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:25:03.118001  149886 kubeadm.go:310] 
	I1212 01:25:03.118131  149886 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:25:03.118235  149886 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:25:03.118248  149886 kubeadm.go:310] 
	I1212 01:25:03.118371  149886 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 32i106.qdq735g1wrnegf4g \
	I1212 01:25:03.118535  149886 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:25:03.118567  149886 kubeadm.go:310] 	--control-plane 
	I1212 01:25:03.118576  149886 kubeadm.go:310] 
	I1212 01:25:03.118703  149886 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:25:03.118712  149886 kubeadm.go:310] 
	I1212 01:25:03.118835  149886 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 32i106.qdq735g1wrnegf4g \
	I1212 01:25:03.119000  149886 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:25:03.119344  149886 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:25:03.119405  149886 cni.go:84] Creating CNI manager for "kindnet"
	I1212 01:25:03.121729  149886 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1212 01:24:59.225453  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:25:01.726404  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:25:03.122951  149886 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 01:25:03.129306  149886 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1212 01:25:03.129325  149886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 01:25:03.147699  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 01:25:03.433566  149886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:25:03.433688  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:03.433710  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-018985 minikube.k8s.io/updated_at=2024_12_12T01_25_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=kindnet-018985 minikube.k8s.io/primary=true
	I1212 01:25:03.616200  149886 ops.go:34] apiserver oom_adj: -16
	I1212 01:25:03.616262  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:04.116993  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:04.616633  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:05.116628  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:05.616832  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:06.116971  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:06.617076  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:07.116326  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:07.616371  149886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:25:07.727441  149886 kubeadm.go:1113] duration metric: took 4.293813668s to wait for elevateKubeSystemPrivileges
	I1212 01:25:07.727483  149886 kubeadm.go:394] duration metric: took 15.294115175s to StartCluster
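The block of repeated "kubectl get sa default" runs above is a poll loop: the same command is retried roughly every 500ms until the default service account exists, which the log reports took about 4.3s (the elevateKubeSystemPrivileges step). Below is a minimal sketch of that retry pattern using os/exec; the 2-minute deadline and the retryUntil helper name are assumptions, not minikube's own values.

// Sketch of the retry loop implied by the repeated "kubectl get sa default" runs above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func retryUntil(deadline time.Time, interval time.Duration, name string, args ...string) error {
	for time.Now().Before(deadline) {
		if err := exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s did not succeed before the deadline", name)
}

func main() {
	err := retryUntil(time.Now().Add(2*time.Minute), 500*time.Millisecond,
		"kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig", "get", "sa", "default")
	fmt.Println(err)
}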
	I1212 01:25:07.727509  149886 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:07.727614  149886 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:25:07.729828  149886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:25:07.730087  149886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 01:25:07.730114  149886 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.69 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:25:07.730162  149886 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:25:07.730269  149886 addons.go:69] Setting storage-provisioner=true in profile "kindnet-018985"
	I1212 01:25:07.730289  149886 addons.go:234] Setting addon storage-provisioner=true in "kindnet-018985"
	I1212 01:25:07.730324  149886 host.go:66] Checking if "kindnet-018985" exists ...
	I1212 01:25:07.730324  149886 config.go:182] Loaded profile config "kindnet-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:25:07.730345  149886 addons.go:69] Setting default-storageclass=true in profile "kindnet-018985"
	I1212 01:25:07.730365  149886 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-018985"
	I1212 01:25:07.730742  149886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:25:07.730786  149886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:25:07.730865  149886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:25:07.730906  149886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:25:07.731761  149886 out.go:177] * Verifying Kubernetes components...
	I1212 01:25:07.733141  149886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:25:07.746460  149886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I1212 01:25:07.746462  149886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37781
	I1212 01:25:07.746994  149886 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:25:07.747022  149886 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:25:07.747519  149886 main.go:141] libmachine: Using API Version  1
	I1212 01:25:07.747540  149886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:25:07.747677  149886 main.go:141] libmachine: Using API Version  1
	I1212 01:25:07.747700  149886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:25:07.747905  149886 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:25:07.748059  149886 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:25:07.748249  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetState
	I1212 01:25:07.748418  149886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:25:07.748464  149886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:25:07.751216  149886 addons.go:234] Setting addon default-storageclass=true in "kindnet-018985"
	I1212 01:25:07.751249  149886 host.go:66] Checking if "kindnet-018985" exists ...
	I1212 01:25:07.751517  149886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:25:07.751545  149886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:25:07.764880  149886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41031
	I1212 01:25:07.765340  149886 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:25:07.765677  149886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33673
	I1212 01:25:07.765848  149886 main.go:141] libmachine: Using API Version  1
	I1212 01:25:07.765872  149886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:25:07.766131  149886 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:25:07.766206  149886 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:25:07.766341  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetState
	I1212 01:25:07.767008  149886 main.go:141] libmachine: Using API Version  1
	I1212 01:25:07.767031  149886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:25:07.767683  149886 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:25:07.768413  149886 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:25:07.768456  149886 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:25:07.768687  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:25:07.770804  149886 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:25:07.772170  149886 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:25:07.772188  149886 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:25:07.772203  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:25:07.775155  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:25:07.775651  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:25:07.775676  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:25:07.775764  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:25:07.775948  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:25:07.776123  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:25:07.776274  149886 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa Username:docker}
	I1212 01:25:07.785318  149886 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
	I1212 01:25:07.785882  149886 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:25:07.786297  149886 main.go:141] libmachine: Using API Version  1
	I1212 01:25:07.786313  149886 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:25:07.786597  149886 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:25:07.786707  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetState
	I1212 01:25:07.788390  149886 main.go:141] libmachine: (kindnet-018985) Calling .DriverName
	I1212 01:25:07.788565  149886 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:25:07.788578  149886 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:25:07.788590  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHHostname
	I1212 01:25:07.791172  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:25:07.791675  149886 main.go:141] libmachine: (kindnet-018985) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:05:69", ip: ""} in network mk-kindnet-018985: {Iface:virbr2 ExpiryTime:2024-12-12 02:24:32 +0000 UTC Type:0 Mac:52:54:00:45:05:69 Iaid: IPaddr:192.168.50.69 Prefix:24 Hostname:kindnet-018985 Clientid:01:52:54:00:45:05:69}
	I1212 01:25:07.791697  149886 main.go:141] libmachine: (kindnet-018985) DBG | domain kindnet-018985 has defined IP address 192.168.50.69 and MAC address 52:54:00:45:05:69 in network mk-kindnet-018985
	I1212 01:25:07.791839  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHPort
	I1212 01:25:07.791983  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHKeyPath
	I1212 01:25:07.792087  149886 main.go:141] libmachine: (kindnet-018985) Calling .GetSSHUsername
	I1212 01:25:07.792240  149886 sshutil.go:53] new ssh client: &{IP:192.168.50.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/kindnet-018985/id_rsa Username:docker}
	I1212 01:25:08.005569  149886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:25:08.005708  149886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
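	(The snippet that the sed pipeline above injects into the CoreDNS Corefile, reconstructed from that command alone — indentation approximate and not part of the original log — is:

	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }

	inserted ahead of the "forward . /etc/resolv.conf" directive, with an additional "log" directive inserted before the "errors" line.)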
	I1212 01:25:08.026718  149886 node_ready.go:35] waiting up to 15m0s for node "kindnet-018985" to be "Ready" ...
	I1212 01:25:08.156948  149886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:25:08.247396  149886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:25:08.501610  149886 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1212 01:25:08.501729  149886 main.go:141] libmachine: Making call to close driver server
	I1212 01:25:08.501745  149886 main.go:141] libmachine: (kindnet-018985) Calling .Close
	I1212 01:25:08.502027  149886 main.go:141] libmachine: (kindnet-018985) DBG | Closing plugin on server side
	I1212 01:25:08.502075  149886 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:25:08.502087  149886 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:25:08.502099  149886 main.go:141] libmachine: Making call to close driver server
	I1212 01:25:08.502109  149886 main.go:141] libmachine: (kindnet-018985) Calling .Close
	I1212 01:25:08.502359  149886 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:25:08.502375  149886 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:25:08.516324  149886 main.go:141] libmachine: Making call to close driver server
	I1212 01:25:08.516365  149886 main.go:141] libmachine: (kindnet-018985) Calling .Close
	I1212 01:25:08.516672  149886 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:25:08.516697  149886 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:25:08.516706  149886 main.go:141] libmachine: (kindnet-018985) DBG | Closing plugin on server side
	I1212 01:25:08.778927  149886 main.go:141] libmachine: Making call to close driver server
	I1212 01:25:08.779020  149886 main.go:141] libmachine: (kindnet-018985) Calling .Close
	I1212 01:25:08.779341  149886 main.go:141] libmachine: (kindnet-018985) DBG | Closing plugin on server side
	I1212 01:25:08.779366  149886 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:25:08.779382  149886 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:25:08.779399  149886 main.go:141] libmachine: Making call to close driver server
	I1212 01:25:08.779409  149886 main.go:141] libmachine: (kindnet-018985) Calling .Close
	I1212 01:25:08.779644  149886 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:25:08.779664  149886 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:25:08.779675  149886 main.go:141] libmachine: (kindnet-018985) DBG | Closing plugin on server side
	I1212 01:25:08.782580  149886 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1212 01:25:04.226026  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:25:06.726552  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:25:08.784057  149886 addons.go:510] duration metric: took 1.053908236s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1212 01:25:09.006123  149886 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-018985" context rescaled to 1 replicas
	I1212 01:25:10.031433  149886 node_ready.go:53] node "kindnet-018985" has status "Ready":"False"
	I1212 01:25:09.224110  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:25:11.225385  149302 pod_ready.go:103] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"False"
	I1212 01:25:12.225261  149302 pod_ready.go:93] pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:12.225286  149302 pod_ready.go:82] duration metric: took 35.50691137s for pod "coredns-7c65d6cfc9-4bkt5" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.225297  149302 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-jvcpv" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.227341  149302 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-jvcpv" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jvcpv" not found
	I1212 01:25:12.227371  149302 pod_ready.go:82] duration metric: took 2.066697ms for pod "coredns-7c65d6cfc9-jvcpv" in "kube-system" namespace to be "Ready" ...
	E1212 01:25:12.227383  149302 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-jvcpv" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-jvcpv" not found
	I1212 01:25:12.227393  149302 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.236926  149302 pod_ready.go:93] pod "etcd-auto-018985" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:12.236947  149302 pod_ready.go:82] duration metric: took 9.546694ms for pod "etcd-auto-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.236956  149302 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.241684  149302 pod_ready.go:93] pod "kube-apiserver-auto-018985" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:12.241705  149302 pod_ready.go:82] duration metric: took 4.742875ms for pod "kube-apiserver-auto-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.241717  149302 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.246146  149302 pod_ready.go:93] pod "kube-controller-manager-auto-018985" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:12.246170  149302 pod_ready.go:82] duration metric: took 4.445305ms for pod "kube-controller-manager-auto-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.246182  149302 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-wgjtc" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.423092  149302 pod_ready.go:93] pod "kube-proxy-wgjtc" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:12.423118  149302 pod_ready.go:82] duration metric: took 176.928737ms for pod "kube-proxy-wgjtc" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.423129  149302 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.821973  149302 pod_ready.go:93] pod "kube-scheduler-auto-018985" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:12.822000  149302 pod_ready.go:82] duration metric: took 398.865246ms for pod "kube-scheduler-auto-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:12.822009  149302 pod_ready.go:39] duration metric: took 36.114038625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:25:12.822025  149302 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:25:12.822096  149302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:12.838200  149302 api_server.go:72] duration metric: took 36.445747794s to wait for apiserver process to appear ...
	I1212 01:25:12.838227  149302 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:25:12.838250  149302 api_server.go:253] Checking apiserver healthz at https://192.168.61.183:8443/healthz ...
	I1212 01:25:12.843137  149302 api_server.go:279] https://192.168.61.183:8443/healthz returned 200:
	ok
	I1212 01:25:12.844273  149302 api_server.go:141] control plane version: v1.31.2
	I1212 01:25:12.844299  149302 api_server.go:131] duration metric: took 6.065023ms to wait for apiserver health ...
	I1212 01:25:12.844310  149302 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:25:13.027053  149302 system_pods.go:59] 7 kube-system pods found
	I1212 01:25:13.027092  149302 system_pods.go:61] "coredns-7c65d6cfc9-4bkt5" [32525d62-c544-4ab3-99e0-4750bcb91e81] Running
	I1212 01:25:13.027098  149302 system_pods.go:61] "etcd-auto-018985" [e5709a1e-543a-440f-9060-c7d8f971b9a3] Running
	I1212 01:25:13.027105  149302 system_pods.go:61] "kube-apiserver-auto-018985" [f2a6f6ff-b7ec-4dba-bed6-25ec1ec8bdae] Running
	I1212 01:25:13.027111  149302 system_pods.go:61] "kube-controller-manager-auto-018985" [71395130-46e4-4b61-9859-98ebd2f7f41b] Running
	I1212 01:25:13.027116  149302 system_pods.go:61] "kube-proxy-wgjtc" [50e01d6c-1c75-4fde-89cd-b99233bc1c63] Running
	I1212 01:25:13.027120  149302 system_pods.go:61] "kube-scheduler-auto-018985" [9b3108c6-fa01-4e66-97f6-b7b094c7116e] Running
	I1212 01:25:13.027126  149302 system_pods.go:61] "storage-provisioner" [e0aa48dd-aeb8-4dcc-a499-c4b93a09ff68] Running
	I1212 01:25:13.027133  149302 system_pods.go:74] duration metric: took 182.816043ms to wait for pod list to return data ...
	I1212 01:25:13.027144  149302 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:25:13.223312  149302 default_sa.go:45] found service account: "default"
	I1212 01:25:13.223343  149302 default_sa.go:55] duration metric: took 196.192769ms for default service account to be created ...
	I1212 01:25:13.223353  149302 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:25:13.426585  149302 system_pods.go:86] 7 kube-system pods found
	I1212 01:25:13.426625  149302 system_pods.go:89] "coredns-7c65d6cfc9-4bkt5" [32525d62-c544-4ab3-99e0-4750bcb91e81] Running
	I1212 01:25:13.426634  149302 system_pods.go:89] "etcd-auto-018985" [e5709a1e-543a-440f-9060-c7d8f971b9a3] Running
	I1212 01:25:13.426642  149302 system_pods.go:89] "kube-apiserver-auto-018985" [f2a6f6ff-b7ec-4dba-bed6-25ec1ec8bdae] Running
	I1212 01:25:13.426651  149302 system_pods.go:89] "kube-controller-manager-auto-018985" [71395130-46e4-4b61-9859-98ebd2f7f41b] Running
	I1212 01:25:13.426656  149302 system_pods.go:89] "kube-proxy-wgjtc" [50e01d6c-1c75-4fde-89cd-b99233bc1c63] Running
	I1212 01:25:13.426661  149302 system_pods.go:89] "kube-scheduler-auto-018985" [9b3108c6-fa01-4e66-97f6-b7b094c7116e] Running
	I1212 01:25:13.426667  149302 system_pods.go:89] "storage-provisioner" [e0aa48dd-aeb8-4dcc-a499-c4b93a09ff68] Running
	I1212 01:25:13.426675  149302 system_pods.go:126] duration metric: took 203.314792ms to wait for k8s-apps to be running ...
	I1212 01:25:13.426687  149302 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:25:13.426742  149302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:25:13.445836  149302 system_svc.go:56] duration metric: took 19.137428ms WaitForService to wait for kubelet
	I1212 01:25:13.445868  149302 kubeadm.go:582] duration metric: took 37.053422147s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:25:13.445888  149302 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:25:13.622597  149302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:25:13.622627  149302 node_conditions.go:123] node cpu capacity is 2
	I1212 01:25:13.622639  149302 node_conditions.go:105] duration metric: took 176.745954ms to run NodePressure ...
	I1212 01:25:13.622651  149302 start.go:241] waiting for startup goroutines ...
	I1212 01:25:13.622657  149302 start.go:246] waiting for cluster config update ...
	I1212 01:25:13.622667  149302 start.go:255] writing updated cluster config ...
	I1212 01:25:13.622996  149302 ssh_runner.go:195] Run: rm -f paused
	I1212 01:25:13.671473  149302 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:25:13.674595  149302 out.go:177] * Done! kubectl is now configured to use "auto-018985" cluster and "default" namespace by default
	I1212 01:25:12.530882  149886 node_ready.go:53] node "kindnet-018985" has status "Ready":"False"
	I1212 01:25:15.030394  149886 node_ready.go:53] node "kindnet-018985" has status "Ready":"False"
	I1212 01:25:17.031088  149886 node_ready.go:53] node "kindnet-018985" has status "Ready":"False"
	I1212 01:25:19.530404  149886 node_ready.go:53] node "kindnet-018985" has status "Ready":"False"
	I1212 01:25:22.030181  149886 node_ready.go:53] node "kindnet-018985" has status "Ready":"False"
	I1212 01:25:24.031057  149886 node_ready.go:53] node "kindnet-018985" has status "Ready":"False"
	I1212 01:25:25.531100  149886 node_ready.go:49] node "kindnet-018985" has status "Ready":"True"
	I1212 01:25:25.531147  149886 node_ready.go:38] duration metric: took 17.504375322s for node "kindnet-018985" to be "Ready" ...
	I1212 01:25:25.531162  149886 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:25:25.541508  149886 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-qph5p" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.047932  149886 pod_ready.go:93] pod "coredns-7c65d6cfc9-qph5p" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:27.047956  149886 pod_ready.go:82] duration metric: took 1.506414723s for pod "coredns-7c65d6cfc9-qph5p" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.047966  149886 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.051954  149886 pod_ready.go:93] pod "etcd-kindnet-018985" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:27.051973  149886 pod_ready.go:82] duration metric: took 4.001692ms for pod "etcd-kindnet-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.051983  149886 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.056255  149886 pod_ready.go:93] pod "kube-apiserver-kindnet-018985" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:27.056274  149886 pod_ready.go:82] duration metric: took 4.285485ms for pod "kube-apiserver-kindnet-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.056284  149886 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.060320  149886 pod_ready.go:93] pod "kube-controller-manager-kindnet-018985" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:27.060337  149886 pod_ready.go:82] duration metric: took 4.045462ms for pod "kube-controller-manager-kindnet-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.060345  149886 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-rkd5m" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.131382  149886 pod_ready.go:93] pod "kube-proxy-rkd5m" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:27.131409  149886 pod_ready.go:82] duration metric: took 71.0573ms for pod "kube-proxy-rkd5m" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.131422  149886 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.530531  149886 pod_ready.go:93] pod "kube-scheduler-kindnet-018985" in "kube-system" namespace has status "Ready":"True"
	I1212 01:25:27.530562  149886 pod_ready.go:82] duration metric: took 399.131249ms for pod "kube-scheduler-kindnet-018985" in "kube-system" namespace to be "Ready" ...
	I1212 01:25:27.530573  149886 pod_ready.go:39] duration metric: took 1.999390595s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:25:27.530592  149886 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:25:27.530641  149886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:25:27.545878  149886 api_server.go:72] duration metric: took 19.815725508s to wait for apiserver process to appear ...
	I1212 01:25:27.545909  149886 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:25:27.545930  149886 api_server.go:253] Checking apiserver healthz at https://192.168.50.69:8443/healthz ...
	I1212 01:25:27.551325  149886 api_server.go:279] https://192.168.50.69:8443/healthz returned 200:
	ok
	I1212 01:25:27.552212  149886 api_server.go:141] control plane version: v1.31.2
	I1212 01:25:27.552234  149886 api_server.go:131] duration metric: took 6.31842ms to wait for apiserver health ...
	I1212 01:25:27.552243  149886 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:25:27.733552  149886 system_pods.go:59] 8 kube-system pods found
	I1212 01:25:27.733584  149886 system_pods.go:61] "coredns-7c65d6cfc9-qph5p" [f320ffa6-89ea-4461-84d0-d3445c16e16e] Running
	I1212 01:25:27.733589  149886 system_pods.go:61] "etcd-kindnet-018985" [922a9559-3017-419f-a6ee-55198867525e] Running
	I1212 01:25:27.733592  149886 system_pods.go:61] "kindnet-d8jgl" [876b8084-de22-4c87-b790-4f4b4a3d6f8e] Running
	I1212 01:25:27.733596  149886 system_pods.go:61] "kube-apiserver-kindnet-018985" [ef2f5574-d448-4e39-b7d7-a46e4e2a5541] Running
	I1212 01:25:27.733599  149886 system_pods.go:61] "kube-controller-manager-kindnet-018985" [420103e9-f5a9-44c6-9cea-958f013ef289] Running
	I1212 01:25:27.733602  149886 system_pods.go:61] "kube-proxy-rkd5m" [9104894b-ef6a-48db-9acb-1881bbf1200d] Running
	I1212 01:25:27.733605  149886 system_pods.go:61] "kube-scheduler-kindnet-018985" [c08ca924-d253-4c1d-b083-5244136fa687] Running
	I1212 01:25:27.733608  149886 system_pods.go:61] "storage-provisioner" [cce0d735-c942-4fde-8e79-8b4bd74e5ac8] Running
	I1212 01:25:27.733613  149886 system_pods.go:74] duration metric: took 181.364306ms to wait for pod list to return data ...
	I1212 01:25:27.733621  149886 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:25:27.931790  149886 default_sa.go:45] found service account: "default"
	I1212 01:25:27.931815  149886 default_sa.go:55] duration metric: took 198.188262ms for default service account to be created ...
	I1212 01:25:27.931826  149886 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:25:28.134227  149886 system_pods.go:86] 8 kube-system pods found
	I1212 01:25:28.134259  149886 system_pods.go:89] "coredns-7c65d6cfc9-qph5p" [f320ffa6-89ea-4461-84d0-d3445c16e16e] Running
	I1212 01:25:28.134268  149886 system_pods.go:89] "etcd-kindnet-018985" [922a9559-3017-419f-a6ee-55198867525e] Running
	I1212 01:25:28.134274  149886 system_pods.go:89] "kindnet-d8jgl" [876b8084-de22-4c87-b790-4f4b4a3d6f8e] Running
	I1212 01:25:28.134280  149886 system_pods.go:89] "kube-apiserver-kindnet-018985" [ef2f5574-d448-4e39-b7d7-a46e4e2a5541] Running
	I1212 01:25:28.134286  149886 system_pods.go:89] "kube-controller-manager-kindnet-018985" [420103e9-f5a9-44c6-9cea-958f013ef289] Running
	I1212 01:25:28.134291  149886 system_pods.go:89] "kube-proxy-rkd5m" [9104894b-ef6a-48db-9acb-1881bbf1200d] Running
	I1212 01:25:28.134297  149886 system_pods.go:89] "kube-scheduler-kindnet-018985" [c08ca924-d253-4c1d-b083-5244136fa687] Running
	I1212 01:25:28.134300  149886 system_pods.go:89] "storage-provisioner" [cce0d735-c942-4fde-8e79-8b4bd74e5ac8] Running
	I1212 01:25:28.134308  149886 system_pods.go:126] duration metric: took 202.476265ms to wait for k8s-apps to be running ...
	I1212 01:25:28.134319  149886 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:25:28.134373  149886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:25:28.149612  149886 system_svc.go:56] duration metric: took 15.280845ms WaitForService to wait for kubelet
	I1212 01:25:28.149648  149886 kubeadm.go:582] duration metric: took 20.4195025s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:25:28.149667  149886 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:25:28.331268  149886 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:25:28.331293  149886 node_conditions.go:123] node cpu capacity is 2
	I1212 01:25:28.331304  149886 node_conditions.go:105] duration metric: took 181.632618ms to run NodePressure ...
	I1212 01:25:28.331315  149886 start.go:241] waiting for startup goroutines ...
	I1212 01:25:28.331321  149886 start.go:246] waiting for cluster config update ...
	I1212 01:25:28.331337  149886 start.go:255] writing updated cluster config ...
	I1212 01:25:28.331619  149886 ssh_runner.go:195] Run: rm -f paused
	I1212 01:25:28.380324  149886 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:25:28.382338  149886 out.go:177] * Done! kubectl is now configured to use "kindnet-018985" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.340030735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966733340000234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e5213f9-3e5f-4fbf-8306-4394ccbe5a8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.340731293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfbdcf28-1dc7-48c1-98c9-162630a90bd1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.340802974Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfbdcf28-1dc7-48c1-98c9-162630a90bd1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.340990621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b,PodSandboxId:e4798bc9a1216ac9764418f919a7d3d0dfb284bd64982bb1d3e29f8fae5dcc24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965709682228667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67b42bd-ae67-4446-99ec-451650bd8c11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c,PodSandboxId:f91d6be142c4351cb052e93b5c455bb5dda2f8cc390fce1633220efc73cc7c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709300176494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9plj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e559d2-f6ac-4c21-b344-96266b6d3622,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687,PodSandboxId:1bdfcd11dd3d47672ca53c9f678dacaff18a497c579c5318e98068c214de57b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709232992554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6j4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 710be306-064a-4506-9649-51853913362d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f,PodSandboxId:9dba36f674d90be7b1ab32c5db9c5912f89e1660149990f228cbb6208508102c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733965708628410049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd2mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6293f3-649a-4a96-8e4c-1028fa12b909,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8,PodSandboxId:14d2abd6ec1160be4cf36411aaa6aba795cee342bf1c35a586b60fe438d6d98e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173396569
7616285162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690c5f76db609ba51d9a49e22a7df9a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b,PodSandboxId:4f3d6154fbb9c7b1872c0fcde032a20b995399b301268b3559beeb13a76b8be9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733965697582160963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2,PodSandboxId:991f61d8b6e80d40aaf7dee4e486410b764491b682700b42a84f0e8991ad0b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733965697538533648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a27a263f04266c589c6bd4f43bb0aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce,PodSandboxId:437f62643121bee8d13085d59d55156fd7de14234d8fd88ccfaff1d618b5ede7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965
697493100475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce87c2085fb5c3bde2b06ed071f751cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc,PodSandboxId:3b64060c9a03c19bc4b725ec8115a02dcbb27ecc17df9dd4d6b27592a738cf51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965407843174679,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfbdcf28-1dc7-48c1-98c9-162630a90bd1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.396224576Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6bdfe7f9-4364-4966-a438-c06cd44bb27b name=/runtime.v1.RuntimeService/Version
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.396303045Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bdfe7f9-4364-4966-a438-c06cd44bb27b name=/runtime.v1.RuntimeService/Version
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.397503622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7887f43-d2f1-416e-a279-298b9079de4c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.398149184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966733398125869,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7887f43-d2f1-416e-a279-298b9079de4c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.398976086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68987ed6-665f-4bb0-bc9e-ef102edbc521 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.399027636Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68987ed6-665f-4bb0-bc9e-ef102edbc521 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.399209576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b,PodSandboxId:e4798bc9a1216ac9764418f919a7d3d0dfb284bd64982bb1d3e29f8fae5dcc24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965709682228667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67b42bd-ae67-4446-99ec-451650bd8c11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c,PodSandboxId:f91d6be142c4351cb052e93b5c455bb5dda2f8cc390fce1633220efc73cc7c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709300176494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9plj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e559d2-f6ac-4c21-b344-96266b6d3622,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687,PodSandboxId:1bdfcd11dd3d47672ca53c9f678dacaff18a497c579c5318e98068c214de57b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709232992554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6j4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 710be306-064a-4506-9649-51853913362d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f,PodSandboxId:9dba36f674d90be7b1ab32c5db9c5912f89e1660149990f228cbb6208508102c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733965708628410049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd2mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6293f3-649a-4a96-8e4c-1028fa12b909,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8,PodSandboxId:14d2abd6ec1160be4cf36411aaa6aba795cee342bf1c35a586b60fe438d6d98e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173396569
7616285162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690c5f76db609ba51d9a49e22a7df9a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b,PodSandboxId:4f3d6154fbb9c7b1872c0fcde032a20b995399b301268b3559beeb13a76b8be9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733965697582160963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2,PodSandboxId:991f61d8b6e80d40aaf7dee4e486410b764491b682700b42a84f0e8991ad0b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733965697538533648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a27a263f04266c589c6bd4f43bb0aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce,PodSandboxId:437f62643121bee8d13085d59d55156fd7de14234d8fd88ccfaff1d618b5ede7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965
697493100475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce87c2085fb5c3bde2b06ed071f751cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc,PodSandboxId:3b64060c9a03c19bc4b725ec8115a02dcbb27ecc17df9dd4d6b27592a738cf51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965407843174679,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68987ed6-665f-4bb0-bc9e-ef102edbc521 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.445237854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e6cb3fa-398a-44e7-82b1-55cc94683561 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.445310825Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e6cb3fa-398a-44e7-82b1-55cc94683561 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.446783641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=146b0421-e3f9-49a1-af41-bdd3a93c6a1a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.447443744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966733447417587,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=146b0421-e3f9-49a1-af41-bdd3a93c6a1a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.451122654Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d9028ad-0ba2-4fb8-b8ac-7274f4a0308d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.451201261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d9028ad-0ba2-4fb8-b8ac-7274f4a0308d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.457749219Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b,PodSandboxId:e4798bc9a1216ac9764418f919a7d3d0dfb284bd64982bb1d3e29f8fae5dcc24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965709682228667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67b42bd-ae67-4446-99ec-451650bd8c11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c,PodSandboxId:f91d6be142c4351cb052e93b5c455bb5dda2f8cc390fce1633220efc73cc7c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709300176494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9plj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e559d2-f6ac-4c21-b344-96266b6d3622,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687,PodSandboxId:1bdfcd11dd3d47672ca53c9f678dacaff18a497c579c5318e98068c214de57b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709232992554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6j4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 710be306-064a-4506-9649-51853913362d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f,PodSandboxId:9dba36f674d90be7b1ab32c5db9c5912f89e1660149990f228cbb6208508102c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733965708628410049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd2mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6293f3-649a-4a96-8e4c-1028fa12b909,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8,PodSandboxId:14d2abd6ec1160be4cf36411aaa6aba795cee342bf1c35a586b60fe438d6d98e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173396569
7616285162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690c5f76db609ba51d9a49e22a7df9a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b,PodSandboxId:4f3d6154fbb9c7b1872c0fcde032a20b995399b301268b3559beeb13a76b8be9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733965697582160963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2,PodSandboxId:991f61d8b6e80d40aaf7dee4e486410b764491b682700b42a84f0e8991ad0b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733965697538533648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a27a263f04266c589c6bd4f43bb0aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce,PodSandboxId:437f62643121bee8d13085d59d55156fd7de14234d8fd88ccfaff1d618b5ede7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965
697493100475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce87c2085fb5c3bde2b06ed071f751cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc,PodSandboxId:3b64060c9a03c19bc4b725ec8115a02dcbb27ecc17df9dd4d6b27592a738cf51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965407843174679,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d9028ad-0ba2-4fb8-b8ac-7274f4a0308d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.502817973Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=947a71b9-eb45-42c8-b0b7-e8f73323c7a7 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.502948704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=947a71b9-eb45-42c8-b0b7-e8f73323c7a7 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.504388008Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=997b5251-1818-46dc-bc1e-18992cdff228 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.504864933Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966733504837052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=997b5251-1818-46dc-bc1e-18992cdff228 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.505347210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c96041a4-76c3-4cec-a373-d602bb1ca52d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.505400725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c96041a4-76c3-4cec-a373-d602bb1ca52d name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:25:33 default-k8s-diff-port-076578 crio[719]: time="2024-12-12 01:25:33.505693833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b,PodSandboxId:e4798bc9a1216ac9764418f919a7d3d0dfb284bd64982bb1d3e29f8fae5dcc24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965709682228667,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b67b42bd-ae67-4446-99ec-451650bd8c11,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c,PodSandboxId:f91d6be142c4351cb052e93b5c455bb5dda2f8cc390fce1633220efc73cc7c60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709300176494,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-9plj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e559d2-f6ac-4c21-b344-96266b6d3622,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687,PodSandboxId:1bdfcd11dd3d47672ca53c9f678dacaff18a497c579c5318e98068c214de57b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965709232992554,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v6j4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 710be306-064a-4506-9649-51853913362d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f,PodSandboxId:9dba36f674d90be7b1ab32c5db9c5912f89e1660149990f228cbb6208508102c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING
,CreatedAt:1733965708628410049,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gd2mq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db6293f3-649a-4a96-8e4c-1028fa12b909,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8,PodSandboxId:14d2abd6ec1160be4cf36411aaa6aba795cee342bf1c35a586b60fe438d6d98e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:173396569
7616285162,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 690c5f76db609ba51d9a49e22a7df9a0,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b,PodSandboxId:4f3d6154fbb9c7b1872c0fcde032a20b995399b301268b3559beeb13a76b8be9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,Cre
atedAt:1733965697582160963,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2,PodSandboxId:991f61d8b6e80d40aaf7dee4e486410b764491b682700b42a84f0e8991ad0b25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,Creat
edAt:1733965697538533648,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a27a263f04266c589c6bd4f43bb0aeb,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce,PodSandboxId:437f62643121bee8d13085d59d55156fd7de14234d8fd88ccfaff1d618b5ede7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965
697493100475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ce87c2085fb5c3bde2b06ed071f751cf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc,PodSandboxId:3b64060c9a03c19bc4b725ec8115a02dcbb27ecc17df9dd4d6b27592a738cf51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965407843174679,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-076578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ce7d17b0782a3ea18ada8b7d1d2020,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c96041a4-76c3-4cec-a373-d602bb1ca52d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3f05fdc2ca6db       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   e4798bc9a1216       storage-provisioner
	6e99deb43ee24       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   f91d6be142c43       coredns-7c65d6cfc9-9plj4
	d8f7f6160124c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   1bdfcd11dd3d4       coredns-7c65d6cfc9-v6j4v
	0f169e7b4faa3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   17 minutes ago      Running             kube-proxy                0                   9dba36f674d90       kube-proxy-gd2mq
	a098ed9ecb9bc       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   17 minutes ago      Running             kube-controller-manager   2                   14d2abd6ec116       kube-controller-manager-default-k8s-diff-port-076578
	24aff75b31aff       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   17 minutes ago      Running             kube-apiserver            2                   4f3d6154fbb9c       kube-apiserver-default-k8s-diff-port-076578
	a00c8fc8fb2fe       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   17 minutes ago      Running             kube-scheduler            2                   991f61d8b6e80       kube-scheduler-default-k8s-diff-port-076578
	e04e272572be4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   437f62643121b       etcd-default-k8s-diff-port-076578
	c058c57f9ad2b       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   22 minutes ago      Exited              kube-apiserver            1                   3b64060c9a03c       kube-apiserver-default-k8s-diff-port-076578
	
	
	==> coredns [6e99deb43ee24f395408a478d815db75a766483cbc97e0e5aa00187776089d4c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [d8f7f6160124c675818ad6fc4efdff4f8d33690d64f6dd7c3dd9987c6a3b2687] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-076578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-076578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=default-k8s-diff-port-076578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_12T01_08_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 01:08:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-076578
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 01:25:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 01:23:52 +0000   Thu, 12 Dec 2024 01:08:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 01:23:52 +0000   Thu, 12 Dec 2024 01:08:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 01:23:52 +0000   Thu, 12 Dec 2024 01:08:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 01:23:52 +0000   Thu, 12 Dec 2024 01:08:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    default-k8s-diff-port-076578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 69353d120eeb468b849268b0c7842c67
	  System UUID:                69353d12-0eeb-468b-8492-68b0c7842c67
	  Boot ID:                    5ca6dcf2-3db9-4538-97c0-226455ab2231
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9plj4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-v6j4v                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-076578                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-076578             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-076578    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-gd2mq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-076578             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-dkmwp                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node default-k8s-diff-port-076578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node default-k8s-diff-port-076578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node default-k8s-diff-port-076578 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node default-k8s-diff-port-076578 event: Registered Node default-k8s-diff-port-076578 in Controller
	
	
	==> dmesg <==
	[  +0.052764] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.049439] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.091247] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.773392] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.658714] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.655946] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.063145] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069831] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.181018] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.149050] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +0.331077] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[  +4.423348] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.062640] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.115929] systemd-fstab-generator[922]: Ignoring "noauto" option for root device
	[  +5.587633] kauditd_printk_skb: 97 callbacks suppressed
	[  +9.656463] kauditd_printk_skb: 85 callbacks suppressed
	[Dec12 01:08] systemd-fstab-generator[2593]: Ignoring "noauto" option for root device
	[  +0.076525] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.506074] systemd-fstab-generator[2910]: Ignoring "noauto" option for root device
	[  +0.079835] kauditd_printk_skb: 54 callbacks suppressed
	[  +4.877576] systemd-fstab-generator[3025]: Ignoring "noauto" option for root device
	[  +0.827709] kauditd_printk_skb: 34 callbacks suppressed
	[Dec12 01:09] kauditd_printk_skb: 62 callbacks suppressed
	
	
	==> etcd [e04e272572be44c8c7e95b23d8947ecf46b02d81287a663c9ea31c2ca83bc2ce] <==
	{"level":"info","ts":"2024-12-12T01:08:18.726106Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:08:18.726195Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:08:18.726235Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:08:18.726197Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:08:18.727014Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:08:18.727081Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:08:18.729510Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-12T01:08:18.729623Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2024-12-12T01:08:18.727369Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-12T01:08:18.733262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-12T01:18:18.764405Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2024-12-12T01:18:18.773069Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":685,"took":"8.279445ms","hash":2778023139,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2174976,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-12-12T01:18:18.773132Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2778023139,"revision":685,"compact-revision":-1}
	{"level":"info","ts":"2024-12-12T01:23:18.772716Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":927}
	{"level":"info","ts":"2024-12-12T01:23:18.776742Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":927,"took":"3.318391ms","hash":3814156995,"current-db-size-bytes":2174976,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1519616,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-12-12T01:23:18.776824Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3814156995,"revision":927,"compact-revision":685}
	{"level":"info","ts":"2024-12-12T01:23:44.679332Z","caller":"traceutil/trace.go:171","msg":"trace[96086073] transaction","detail":"{read_only:false; response_revision:1194; number_of_response:1; }","duration":"128.953833ms","start":"2024-12-12T01:23:44.550349Z","end":"2024-12-12T01:23:44.679303Z","steps":["trace[96086073] 'process raft request'  (duration: 128.850055ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-12T01:24:51.517375Z","caller":"traceutil/trace.go:171","msg":"trace[623994580] linearizableReadLoop","detail":"{readStateIndex:1462; appliedIndex:1461; }","duration":"143.450603ms","start":"2024-12-12T01:24:51.373890Z","end":"2024-12-12T01:24:51.517341Z","steps":["trace[623994580] 'read index received'  (duration: 143.248416ms)","trace[623994580] 'applied index is now lower than readState.Index'  (duration: 201.529µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-12T01:24:51.517820Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"143.794788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-12T01:24:51.517870Z","caller":"traceutil/trace.go:171","msg":"trace[1093269769] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:1249; }","duration":"143.975565ms","start":"2024-12-12T01:24:51.373884Z","end":"2024-12-12T01:24:51.517860Z","steps":["trace[1093269769] 'agreement among raft nodes before linearized reading'  (duration: 143.764871ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-12T01:24:51.518108Z","caller":"traceutil/trace.go:171","msg":"trace[247978399] transaction","detail":"{read_only:false; response_revision:1249; number_of_response:1; }","duration":"255.655333ms","start":"2024-12-12T01:24:51.262444Z","end":"2024-12-12T01:24:51.518099Z","steps":["trace[247978399] 'process raft request'  (duration: 254.750848ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-12T01:24:52.193623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.12361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2024-12-12T01:24:52.193776Z","caller":"traceutil/trace.go:171","msg":"trace[109727648] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:1250; }","duration":"118.351212ms","start":"2024-12-12T01:24:52.075410Z","end":"2024-12-12T01:24:52.193762Z","steps":["trace[109727648] 'range keys from in-memory index tree'  (duration: 118.002867ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-12T01:24:52.193720Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.928203ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-12T01:24:52.194069Z","caller":"traceutil/trace.go:171","msg":"trace[720778893] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1250; }","duration":"102.301967ms","start":"2024-12-12T01:24:52.091759Z","end":"2024-12-12T01:24:52.194061Z","steps":["trace[720778893] 'range keys from in-memory index tree'  (duration: 101.918361ms)"],"step_count":1}
	
	
	==> kernel <==
	 01:25:33 up 22 min,  0 users,  load average: 0.27, 0.17, 0.14
	Linux default-k8s-diff-port-076578 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [24aff75b31aff6af6f75aeb849dbfc956e0bb21a729fa15b5f1ddf66f0bda81b] <==
	I1212 01:21:21.222311       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:21:21.222379       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:23:20.220682       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:23:20.220856       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1212 01:23:21.222698       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:23:21.222796       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1212 01:23:21.222962       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:23:21.223125       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:23:21.223951       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:23:21.225084       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:24:21.224219       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:24:21.224312       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1212 01:24:21.225439       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:24:21.225635       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:24:21.225747       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:24:21.226824       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [c058c57f9ad2b83650e14c223407406ec6f5229b179450a859d12b5ded01e6cc] <==
	W1212 01:08:13.630199       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.656217       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.722864       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.753254       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.783139       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:13.925261       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.014789       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.022222       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.064303       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.064516       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.085376       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.195926       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.284050       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.308173       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.351349       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.359796       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.363095       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.373830       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.451828       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.474711       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.517698       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.616224       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.679886       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.682363       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:14.692877       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a098ed9ecb9bc87abf85477265a9e5c29a1d0be179d49dfeed7e03b548c2a7c8] <==
	E1212 01:20:27.339071       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:20:27.791201       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:20:57.347500       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:20:57.804231       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:21:27.356118       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:21:27.813290       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:21:57.362292       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:21:57.821325       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:22:27.368745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:22:27.829160       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:22:57.375639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:22:57.836544       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:23:27.383108       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:23:27.847464       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:23:52.649127       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-076578"
	E1212 01:23:57.389153       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:23:57.855785       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:24:27.398516       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:24:27.868197       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:24:27.952510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="285.131µs"
	I1212 01:24:39.941307       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="78.331µs"
	E1212 01:24:57.407267       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:24:57.877213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:25:27.413846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:25:27.885481       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0f169e7b4faa3f785377ec661e8ee7af2a97dfa4e23989b59d4ad658224cda5f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1212 01:08:29.370504       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1212 01:08:29.446289       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.174"]
	E1212 01:08:29.446416       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:08:29.667045       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 01:08:29.667148       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:08:29.667333       1 server_linux.go:169] "Using iptables Proxier"
	I1212 01:08:29.683791       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:08:29.684064       1 server.go:483] "Version info" version="v1.31.2"
	I1212 01:08:29.684075       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:08:29.689427       1 config.go:199] "Starting service config controller"
	I1212 01:08:29.689440       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1212 01:08:29.689470       1 config.go:105] "Starting endpoint slice config controller"
	I1212 01:08:29.689475       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1212 01:08:29.690876       1 config.go:328] "Starting node config controller"
	I1212 01:08:29.690897       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1212 01:08:29.789653       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1212 01:08:29.789717       1 shared_informer.go:320] Caches are synced for service config
	I1212 01:08:29.791077       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a00c8fc8fb2fefe37b9cb01821ccda0558c2292a85e0e4040c1de86f670bbaa2] <==
	W1212 01:08:20.221947       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 01:08:20.222350       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1212 01:08:20.221230       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 01:08:20.222417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:20.223040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 01:08:20.223156       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:20.227238       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:20.227346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.060012       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:21.060045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.085732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 01:08:21.085852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.087674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:21.087770       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.239269       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1212 01:08:21.239670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.371938       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 01:08:21.371989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.388896       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1212 01:08:21.388963       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.477745       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1212 01:08:21.477796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1212 01:08:21.561472       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1212 01:08:21.561525       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1212 01:08:24.518529       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 12 01:24:27 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:27.930756    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:24:33 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:33.173913    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966673173309284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:24:33 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:33.174277    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966673173309284,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:24:39 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:39.927340    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:24:43 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:43.175930    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966683175441721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:24:43 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:43.176012    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966683175441721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:24:50 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:50.926899    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:24:53 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:53.177430    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966693177103160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:24:53 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:24:53.177499    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966693177103160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:25:03 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:03.179361    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966703179152904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:25:03 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:03.179415    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966703179152904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:25:04 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:04.926982    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:25:13 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:13.181345    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966713181051390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:25:13 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:13.181401    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966713181051390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:25:19 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:19.927247    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:25:22 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:22.974524    2918 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 01:25:22 default-k8s-diff-port-076578 kubelet[2918]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 01:25:22 default-k8s-diff-port-076578 kubelet[2918]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 01:25:22 default-k8s-diff-port-076578 kubelet[2918]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 01:25:22 default-k8s-diff-port-076578 kubelet[2918]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 01:25:23 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:23.183976    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966723183668010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:25:23 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:23.184001    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966723183668010,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:25:31 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:31.927227    2918 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-dkmwp" podUID="ba79e06c-1471-43a1-9977-f8977b38fb46"
	Dec 12 01:25:33 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:33.188941    2918 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966733187212285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:25:33 default-k8s-diff-port-076578 kubelet[2918]: E1212 01:25:33.188978    2918 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966733187212285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [3f05fdc2ca6db5eb4fd7cb93253ffbd90deed9db01e5a37d602d57e817cf107b] <==
	I1212 01:08:29.773258       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 01:08:29.784207       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 01:08:29.784354       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 01:08:29.793105       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 01:08:29.793457       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-076578_c0950335-78bd-463b-800d-f691339a8e72!
	I1212 01:08:29.794440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd604350-6e37-45e9-9147-b066bd31081c", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-076578_c0950335-78bd-463b-800d-f691339a8e72 became leader
	I1212 01:08:29.893810       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-076578_c0950335-78bd-463b-800d-f691339a8e72!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-076578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-dkmwp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-076578 describe pod metrics-server-6867b74b74-dkmwp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-076578 describe pod metrics-server-6867b74b74-dkmwp: exit status 1 (90.551925ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-dkmwp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-076578 describe pod metrics-server-6867b74b74-dkmwp: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (476.73s)
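Note on the post-mortem above: the kubelet log places metrics-server-6867b74b74-dkmwp in the kube-system namespace (pod="kube-system/metrics-server-6867b74b74-dkmwp"), while the post-mortem describe is issued without a namespace flag, so kubectl looks in default and reports NotFound. A namespaced variant of that check, shown here only as a hypothetical manual re-run against the default-k8s-diff-port-076578 profile, would be:

	kubectl --context default-k8s-diff-port-076578 -n kube-system describe pod metrics-server-6867b74b74-dkmwp

The ImagePullBackOff in the kubelet log is expected for this test: the audit table further down shows metrics-server was enabled with --registries=MetricsServer=fake.domain, so fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled.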

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (324.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-242725 -n no-preload-242725
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-12 01:23:45.735499433 +0000 UTC m=+6623.827167039
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-242725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-242725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.735µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-242725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
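The describe call above exits after 1.735µs with "context deadline exceeded": the 9m wait for the dashboard pods has already consumed the test context, so no deployment info is collected for the image assertion at start_stop_delete_test.go:297. A manual re-check of the same two conditions, assuming the no-preload-242725 profile is still running and the dashboard addon uses its default namespace and labels, could look like:

	kubectl --context no-preload-242725 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-242725 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[0].image}'

The second command mirrors the expectation that the scraper image contains registry.k8s.io/echoserver:1.4, matching the --images=MetricsScraper override recorded in the audit table below.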
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-242725 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-242725 logs -n 25: (1.547644105s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-459384                           | kubernetes-upgrade-459384    | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:54 UTC |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-242725             | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	| addons  | enable metrics-server -p embed-certs-607268            | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-535684 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | disable-driver-mounts-535684                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:56 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-076578  | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC | 12 Dec 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC | 12 Dec 24 01:23 UTC |
	| start   | -p newest-cni-819544 --memory=2200 --alsologtostderr   | newest-cni-819544            | jenkins | v1.34.0 | 12 Dec 24 01:23 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 01:23:11
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 01:23:11.236278  148785 out.go:345] Setting OutFile to fd 1 ...
	I1212 01:23:11.236415  148785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 01:23:11.236426  148785 out.go:358] Setting ErrFile to fd 2...
	I1212 01:23:11.236431  148785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 01:23:11.236608  148785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 01:23:11.237202  148785 out.go:352] Setting JSON to false
	I1212 01:23:11.238223  148785 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":14733,"bootTime":1733951858,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 01:23:11.238323  148785 start.go:139] virtualization: kvm guest
	I1212 01:23:11.240561  148785 out.go:177] * [newest-cni-819544] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 01:23:11.242030  148785 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 01:23:11.242032  148785 notify.go:220] Checking for updates...
	I1212 01:23:11.243659  148785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 01:23:11.244921  148785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:23:11.246098  148785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:23:11.247397  148785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 01:23:11.248788  148785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 01:23:11.250735  148785 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:23:11.250896  148785 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:23:11.251034  148785 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:23:11.251164  148785 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 01:23:11.287643  148785 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 01:23:11.288861  148785 start.go:297] selected driver: kvm2
	I1212 01:23:11.288875  148785 start.go:901] validating driver "kvm2" against <nil>
	I1212 01:23:11.288892  148785 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 01:23:11.289599  148785 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:23:11.289666  148785 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 01:23:11.307855  148785 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 01:23:11.307934  148785 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1212 01:23:11.308011  148785 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1212 01:23:11.308377  148785 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1212 01:23:11.308415  148785 cni.go:84] Creating CNI manager for ""
	I1212 01:23:11.308476  148785 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:23:11.308488  148785 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1212 01:23:11.308570  148785 start.go:340] cluster config:
	{Name:newest-cni-819544 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-819544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:23:11.308725  148785 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 01:23:11.310825  148785 out.go:177] * Starting "newest-cni-819544" primary control-plane node in "newest-cni-819544" cluster
	I1212 01:23:11.312254  148785 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:23:11.312296  148785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1212 01:23:11.312307  148785 cache.go:56] Caching tarball of preloaded images
	I1212 01:23:11.312441  148785 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 01:23:11.312452  148785 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1212 01:23:11.312545  148785 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/config.json ...
	I1212 01:23:11.312562  148785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/config.json: {Name:mk8cfb89831c7850e9b4adc1a0f3ec13c7128f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:11.312716  148785 start.go:360] acquireMachinesLock for newest-cni-819544: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:23:11.312744  148785 start.go:364] duration metric: took 15.872µs to acquireMachinesLock for "newest-cni-819544"
	I1212 01:23:11.312761  148785 start.go:93] Provisioning new machine with config: &{Name:newest-cni-819544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:newest-cni-819544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:23:11.312818  148785 start.go:125] createHost starting for "" (driver="kvm2")
	I1212 01:23:11.315197  148785 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1212 01:23:11.315336  148785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:23:11.315377  148785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:23:11.329954  148785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43711
	I1212 01:23:11.330377  148785 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:23:11.330918  148785 main.go:141] libmachine: Using API Version  1
	I1212 01:23:11.330943  148785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:23:11.331292  148785 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:23:11.331479  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetMachineName
	I1212 01:23:11.331652  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:11.331820  148785 start.go:159] libmachine.API.Create for "newest-cni-819544" (driver="kvm2")
	I1212 01:23:11.331861  148785 client.go:168] LocalClient.Create starting
	I1212 01:23:11.331899  148785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem
	I1212 01:23:11.331935  148785 main.go:141] libmachine: Decoding PEM data...
	I1212 01:23:11.331957  148785 main.go:141] libmachine: Parsing certificate...
	I1212 01:23:11.332045  148785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem
	I1212 01:23:11.332074  148785 main.go:141] libmachine: Decoding PEM data...
	I1212 01:23:11.332090  148785 main.go:141] libmachine: Parsing certificate...
	I1212 01:23:11.332121  148785 main.go:141] libmachine: Running pre-create checks...
	I1212 01:23:11.332131  148785 main.go:141] libmachine: (newest-cni-819544) Calling .PreCreateCheck
	I1212 01:23:11.332471  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetConfigRaw
	I1212 01:23:11.332984  148785 main.go:141] libmachine: Creating machine...
	I1212 01:23:11.332999  148785 main.go:141] libmachine: (newest-cni-819544) Calling .Create
	I1212 01:23:11.333195  148785 main.go:141] libmachine: (newest-cni-819544) Creating KVM machine...
	I1212 01:23:11.334391  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found existing default KVM network
	I1212 01:23:11.335553  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:11.335420  148808 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f0:d6:61} reservation:<nil>}
	I1212 01:23:11.336385  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:11.336315  148808 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:01:2f:c1} reservation:<nil>}
	I1212 01:23:11.337096  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:11.337044  148808 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:23:af:a4} reservation:<nil>}
	I1212 01:23:11.338157  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:11.338091  148808 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289940}
	I1212 01:23:11.338246  148785 main.go:141] libmachine: (newest-cni-819544) DBG | created network xml: 
	I1212 01:23:11.338261  148785 main.go:141] libmachine: (newest-cni-819544) DBG | <network>
	I1212 01:23:11.338268  148785 main.go:141] libmachine: (newest-cni-819544) DBG |   <name>mk-newest-cni-819544</name>
	I1212 01:23:11.338275  148785 main.go:141] libmachine: (newest-cni-819544) DBG |   <dns enable='no'/>
	I1212 01:23:11.338285  148785 main.go:141] libmachine: (newest-cni-819544) DBG |   
	I1212 01:23:11.338297  148785 main.go:141] libmachine: (newest-cni-819544) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1212 01:23:11.338308  148785 main.go:141] libmachine: (newest-cni-819544) DBG |     <dhcp>
	I1212 01:23:11.338321  148785 main.go:141] libmachine: (newest-cni-819544) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1212 01:23:11.338329  148785 main.go:141] libmachine: (newest-cni-819544) DBG |     </dhcp>
	I1212 01:23:11.338334  148785 main.go:141] libmachine: (newest-cni-819544) DBG |   </ip>
	I1212 01:23:11.338339  148785 main.go:141] libmachine: (newest-cni-819544) DBG |   
	I1212 01:23:11.338353  148785 main.go:141] libmachine: (newest-cni-819544) DBG | </network>
	I1212 01:23:11.338358  148785 main.go:141] libmachine: (newest-cni-819544) DBG | 
	I1212 01:23:11.343893  148785 main.go:141] libmachine: (newest-cni-819544) DBG | trying to create private KVM network mk-newest-cni-819544 192.168.72.0/24...
	I1212 01:23:11.419968  148785 main.go:141] libmachine: (newest-cni-819544) DBG | private KVM network mk-newest-cni-819544 192.168.72.0/24 created
	I1212 01:23:11.420002  148785 main.go:141] libmachine: (newest-cni-819544) Setting up store path in /home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544 ...
	I1212 01:23:11.420030  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:11.419847  148808 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:23:11.420051  148785 main.go:141] libmachine: (newest-cni-819544) Building disk image from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1212 01:23:11.420091  148785 main.go:141] libmachine: (newest-cni-819544) Downloading /home/jenkins/minikube-integration/20083-86355/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso...
	I1212 01:23:11.728151  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:11.727977  148808 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa...
	I1212 01:23:11.967891  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:11.967709  148808 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/newest-cni-819544.rawdisk...
	I1212 01:23:11.967931  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Writing magic tar header
	I1212 01:23:11.967948  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Writing SSH key tar header
	I1212 01:23:11.967962  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:11.967873  148808 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544 ...
	I1212 01:23:11.968071  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544
	I1212 01:23:11.968105  148785 main.go:141] libmachine: (newest-cni-819544) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544 (perms=drwx------)
	I1212 01:23:11.968117  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube/machines
	I1212 01:23:11.968138  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 01:23:11.968152  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20083-86355
	I1212 01:23:11.968168  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1212 01:23:11.968188  148785 main.go:141] libmachine: (newest-cni-819544) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube/machines (perms=drwxr-xr-x)
	I1212 01:23:11.968209  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Checking permissions on dir: /home/jenkins
	I1212 01:23:11.968241  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Checking permissions on dir: /home
	I1212 01:23:11.968254  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Skipping /home - not owner
	I1212 01:23:11.968283  148785 main.go:141] libmachine: (newest-cni-819544) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355/.minikube (perms=drwxr-xr-x)
	I1212 01:23:11.968326  148785 main.go:141] libmachine: (newest-cni-819544) Setting executable bit set on /home/jenkins/minikube-integration/20083-86355 (perms=drwxrwxr-x)
	I1212 01:23:11.968344  148785 main.go:141] libmachine: (newest-cni-819544) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1212 01:23:11.968356  148785 main.go:141] libmachine: (newest-cni-819544) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1212 01:23:11.968381  148785 main.go:141] libmachine: (newest-cni-819544) Creating domain...
	I1212 01:23:11.969511  148785 main.go:141] libmachine: (newest-cni-819544) define libvirt domain using xml: 
	I1212 01:23:11.969533  148785 main.go:141] libmachine: (newest-cni-819544) <domain type='kvm'>
	I1212 01:23:11.969548  148785 main.go:141] libmachine: (newest-cni-819544)   <name>newest-cni-819544</name>
	I1212 01:23:11.969557  148785 main.go:141] libmachine: (newest-cni-819544)   <memory unit='MiB'>2200</memory>
	I1212 01:23:11.969565  148785 main.go:141] libmachine: (newest-cni-819544)   <vcpu>2</vcpu>
	I1212 01:23:11.969580  148785 main.go:141] libmachine: (newest-cni-819544)   <features>
	I1212 01:23:11.969596  148785 main.go:141] libmachine: (newest-cni-819544)     <acpi/>
	I1212 01:23:11.969603  148785 main.go:141] libmachine: (newest-cni-819544)     <apic/>
	I1212 01:23:11.969618  148785 main.go:141] libmachine: (newest-cni-819544)     <pae/>
	I1212 01:23:11.969628  148785 main.go:141] libmachine: (newest-cni-819544)     
	I1212 01:23:11.969655  148785 main.go:141] libmachine: (newest-cni-819544)   </features>
	I1212 01:23:11.969680  148785 main.go:141] libmachine: (newest-cni-819544)   <cpu mode='host-passthrough'>
	I1212 01:23:11.969690  148785 main.go:141] libmachine: (newest-cni-819544)   
	I1212 01:23:11.969701  148785 main.go:141] libmachine: (newest-cni-819544)   </cpu>
	I1212 01:23:11.969709  148785 main.go:141] libmachine: (newest-cni-819544)   <os>
	I1212 01:23:11.969720  148785 main.go:141] libmachine: (newest-cni-819544)     <type>hvm</type>
	I1212 01:23:11.969740  148785 main.go:141] libmachine: (newest-cni-819544)     <boot dev='cdrom'/>
	I1212 01:23:11.969754  148785 main.go:141] libmachine: (newest-cni-819544)     <boot dev='hd'/>
	I1212 01:23:11.969766  148785 main.go:141] libmachine: (newest-cni-819544)     <bootmenu enable='no'/>
	I1212 01:23:11.969775  148785 main.go:141] libmachine: (newest-cni-819544)   </os>
	I1212 01:23:11.969783  148785 main.go:141] libmachine: (newest-cni-819544)   <devices>
	I1212 01:23:11.969795  148785 main.go:141] libmachine: (newest-cni-819544)     <disk type='file' device='cdrom'>
	I1212 01:23:11.969819  148785 main.go:141] libmachine: (newest-cni-819544)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/boot2docker.iso'/>
	I1212 01:23:11.969839  148785 main.go:141] libmachine: (newest-cni-819544)       <target dev='hdc' bus='scsi'/>
	I1212 01:23:11.969851  148785 main.go:141] libmachine: (newest-cni-819544)       <readonly/>
	I1212 01:23:11.969861  148785 main.go:141] libmachine: (newest-cni-819544)     </disk>
	I1212 01:23:11.969877  148785 main.go:141] libmachine: (newest-cni-819544)     <disk type='file' device='disk'>
	I1212 01:23:11.969889  148785 main.go:141] libmachine: (newest-cni-819544)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1212 01:23:11.969902  148785 main.go:141] libmachine: (newest-cni-819544)       <source file='/home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/newest-cni-819544.rawdisk'/>
	I1212 01:23:11.969918  148785 main.go:141] libmachine: (newest-cni-819544)       <target dev='hda' bus='virtio'/>
	I1212 01:23:11.969930  148785 main.go:141] libmachine: (newest-cni-819544)     </disk>
	I1212 01:23:11.969937  148785 main.go:141] libmachine: (newest-cni-819544)     <interface type='network'>
	I1212 01:23:11.969947  148785 main.go:141] libmachine: (newest-cni-819544)       <source network='mk-newest-cni-819544'/>
	I1212 01:23:11.969957  148785 main.go:141] libmachine: (newest-cni-819544)       <model type='virtio'/>
	I1212 01:23:11.969965  148785 main.go:141] libmachine: (newest-cni-819544)     </interface>
	I1212 01:23:11.969975  148785 main.go:141] libmachine: (newest-cni-819544)     <interface type='network'>
	I1212 01:23:11.970008  148785 main.go:141] libmachine: (newest-cni-819544)       <source network='default'/>
	I1212 01:23:11.970032  148785 main.go:141] libmachine: (newest-cni-819544)       <model type='virtio'/>
	I1212 01:23:11.970042  148785 main.go:141] libmachine: (newest-cni-819544)     </interface>
	I1212 01:23:11.970051  148785 main.go:141] libmachine: (newest-cni-819544)     <serial type='pty'>
	I1212 01:23:11.970059  148785 main.go:141] libmachine: (newest-cni-819544)       <target port='0'/>
	I1212 01:23:11.970069  148785 main.go:141] libmachine: (newest-cni-819544)     </serial>
	I1212 01:23:11.970078  148785 main.go:141] libmachine: (newest-cni-819544)     <console type='pty'>
	I1212 01:23:11.970093  148785 main.go:141] libmachine: (newest-cni-819544)       <target type='serial' port='0'/>
	I1212 01:23:11.970104  148785 main.go:141] libmachine: (newest-cni-819544)     </console>
	I1212 01:23:11.970116  148785 main.go:141] libmachine: (newest-cni-819544)     <rng model='virtio'>
	I1212 01:23:11.970127  148785 main.go:141] libmachine: (newest-cni-819544)       <backend model='random'>/dev/random</backend>
	I1212 01:23:11.970135  148785 main.go:141] libmachine: (newest-cni-819544)     </rng>
	I1212 01:23:11.970143  148785 main.go:141] libmachine: (newest-cni-819544)     
	I1212 01:23:11.970153  148785 main.go:141] libmachine: (newest-cni-819544)     
	I1212 01:23:11.970161  148785 main.go:141] libmachine: (newest-cni-819544)   </devices>
	I1212 01:23:11.970170  148785 main.go:141] libmachine: (newest-cni-819544) </domain>
	I1212 01:23:11.970180  148785 main.go:141] libmachine: (newest-cni-819544) 
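	The block above is the complete libvirt domain definition the kvm2 driver hands to libvirtd before booting the VM. For reference, a minimal Go sketch (not the driver's actual code) of defining and starting a domain from such an XML string with the libvirt.org/go/libvirt bindings; the connection URI matches the KVMQemuURI logged later, and domainXML is a placeholder that would have to hold a full definition like the one above.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Connect to the local system libvirt daemon (qemu:///system, as in the log).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Placeholder: a real run needs the full <domain> document shown above.
	domainXML := "<domain type='kvm'>...</domain>"

	// Persist the definition, then boot it; this mirrors the
	// "define libvirt domain using xml" and "Creating domain..." steps.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start: %v", err)
	}
	log.Println("domain defined and started")
}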
	I1212 01:23:11.974507  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:ca:06:61 in network default
	I1212 01:23:11.975148  148785 main.go:141] libmachine: (newest-cni-819544) Ensuring networks are active...
	I1212 01:23:11.975174  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:11.975838  148785 main.go:141] libmachine: (newest-cni-819544) Ensuring network default is active
	I1212 01:23:11.976158  148785 main.go:141] libmachine: (newest-cni-819544) Ensuring network mk-newest-cni-819544 is active
	I1212 01:23:11.976661  148785 main.go:141] libmachine: (newest-cni-819544) Getting domain xml...
	I1212 01:23:11.977290  148785 main.go:141] libmachine: (newest-cni-819544) Creating domain...
	I1212 01:23:13.216441  148785 main.go:141] libmachine: (newest-cni-819544) Waiting to get IP...
	I1212 01:23:13.217494  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:13.217909  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:13.217967  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:13.217920  148808 retry.go:31] will retry after 221.82847ms: waiting for machine to come up
	I1212 01:23:13.441264  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:13.441779  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:13.441802  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:13.441730  148808 retry.go:31] will retry after 289.017451ms: waiting for machine to come up
	I1212 01:23:13.732187  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:13.732659  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:13.732690  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:13.732605  148808 retry.go:31] will retry after 307.639821ms: waiting for machine to come up
	I1212 01:23:14.042052  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:14.042619  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:14.042647  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:14.042567  148808 retry.go:31] will retry after 428.27523ms: waiting for machine to come up
	I1212 01:23:14.472176  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:14.472595  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:14.472624  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:14.472527  148808 retry.go:31] will retry after 754.114509ms: waiting for machine to come up
	I1212 01:23:15.227889  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:15.228344  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:15.228394  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:15.228301  148808 retry.go:31] will retry after 938.715739ms: waiting for machine to come up
	I1212 01:23:16.168388  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:16.168917  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:16.168947  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:16.168866  148808 retry.go:31] will retry after 972.560105ms: waiting for machine to come up
	I1212 01:23:17.142960  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:17.143671  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:17.143703  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:17.143616  148808 retry.go:31] will retry after 1.239269872s: waiting for machine to come up
	I1212 01:23:18.384733  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:18.385274  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:18.385306  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:18.385220  148808 retry.go:31] will retry after 1.587203716s: waiting for machine to come up
	I1212 01:23:19.973981  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:19.974392  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:19.974421  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:19.974341  148808 retry.go:31] will retry after 1.780800613s: waiting for machine to come up
	I1212 01:23:21.757253  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:21.757718  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:21.757758  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:21.757711  148808 retry.go:31] will retry after 2.248349443s: waiting for machine to come up
	I1212 01:23:24.008908  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:24.009471  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:24.009502  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:24.009407  148808 retry.go:31] will retry after 2.507531697s: waiting for machine to come up
	I1212 01:23:26.518813  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:26.519367  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:26.519398  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:26.519321  148808 retry.go:31] will retry after 3.133105222s: waiting for machine to come up
	I1212 01:23:29.654288  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:29.654712  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find current IP address of domain newest-cni-819544 in network mk-newest-cni-819544
	I1212 01:23:29.654737  148785 main.go:141] libmachine: (newest-cni-819544) DBG | I1212 01:23:29.654669  148808 retry.go:31] will retry after 5.514320985s: waiting for machine to come up
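	The repeated "will retry after ..." messages above come from a polling loop that re-checks the domain's DHCP lease with a growing, jittered delay until an address appears. A rough sketch of that pattern; waitForIP and the lookup callback are illustrative names, not minikube's retry.go API.

package main

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns a non-empty address or the deadline
// passes, sleeping a growing, jittered interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for machine to come up")
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		log.Printf("will retry after %v: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		if delay < 3*time.Second { // grow roughly like the 200ms -> ~5s steps above
			delay = delay * 3 / 2
		}
	}
}

func main() {
	// Fake lookup that "finds" an address after ~2s, just to exercise the loop.
	start := time.Now()
	ip, err := waitForIP(func() (string, error) {
		if time.Since(start) > 2*time.Second {
			return "192.168.72.217", nil
		}
		return "", nil
	}, 30*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("found IP:", ip)
}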
	I1212 01:23:35.173155  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.173550  148785 main.go:141] libmachine: (newest-cni-819544) Found IP for machine: 192.168.72.217
	I1212 01:23:35.173572  148785 main.go:141] libmachine: (newest-cni-819544) Reserving static IP address...
	I1212 01:23:35.173587  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has current primary IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.173947  148785 main.go:141] libmachine: (newest-cni-819544) DBG | unable to find host DHCP lease matching {name: "newest-cni-819544", mac: "52:54:00:0d:44:40", ip: "192.168.72.217"} in network mk-newest-cni-819544
	I1212 01:23:35.250901  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Getting to WaitForSSH function...
	I1212 01:23:35.250931  148785 main.go:141] libmachine: (newest-cni-819544) Reserved static IP address: 192.168.72.217
	I1212 01:23:35.250943  148785 main.go:141] libmachine: (newest-cni-819544) Waiting for SSH to be available...
	I1212 01:23:35.253698  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.254104  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:35.254135  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.254308  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Using SSH client type: external
	I1212 01:23:35.254337  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa (-rw-------)
	I1212 01:23:35.254388  148785 main.go:141] libmachine: (newest-cni-819544) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:23:35.254410  148785 main.go:141] libmachine: (newest-cni-819544) DBG | About to run SSH command:
	I1212 01:23:35.254428  148785 main.go:141] libmachine: (newest-cni-819544) DBG | exit 0
	I1212 01:23:35.376778  148785 main.go:141] libmachine: (newest-cni-819544) DBG | SSH cmd err, output: <nil>: 
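	The "Using SSH client type: external" probe above simply runs exit 0 over ssh with host-key checking disabled and waits for it to succeed. A hedged equivalent via os/exec; the flags mirror the command logged above, while the key path in main is illustrative rather than the real one.

package main

import (
	"log"
	"os/exec"
	"time"
)

// sshReady reports whether "ssh ... exit 0" succeeds, i.e. sshd in the guest
// accepts the key and can run a command.
func sshReady(ip, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+ip,
		"exit", "0")
	return cmd.Run() == nil
}

func main() {
	ip := "192.168.72.217"                                        // from the DHCP lease above
	key := "/path/to/.minikube/machines/newest-cni-819544/id_rsa" // illustrative path
	for !sshReady(ip, key) {
		time.Sleep(2 * time.Second)
	}
	log.Println("SSH is available")
}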
	I1212 01:23:35.377062  148785 main.go:141] libmachine: (newest-cni-819544) KVM machine creation complete!
	I1212 01:23:35.377396  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetConfigRaw
	I1212 01:23:35.377990  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:35.378194  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:35.378432  148785 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1212 01:23:35.378450  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetState
	I1212 01:23:35.380013  148785 main.go:141] libmachine: Detecting operating system of created instance...
	I1212 01:23:35.380049  148785 main.go:141] libmachine: Waiting for SSH to be available...
	I1212 01:23:35.380065  148785 main.go:141] libmachine: Getting to WaitForSSH function...
	I1212 01:23:35.380074  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:35.382567  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.382891  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:35.382919  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.383112  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:35.383276  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:35.383447  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:35.383613  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:35.383751  148785 main.go:141] libmachine: Using SSH client type: native
	I1212 01:23:35.383995  148785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 01:23:35.384007  148785 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1212 01:23:35.483030  148785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:23:35.483074  148785 main.go:141] libmachine: Detecting the provisioner...
	I1212 01:23:35.483087  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:35.485869  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.486228  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:35.486254  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.486450  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:35.486651  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:35.486808  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:35.486961  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:35.487120  148785 main.go:141] libmachine: Using SSH client type: native
	I1212 01:23:35.487322  148785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 01:23:35.487334  148785 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1212 01:23:35.592755  148785 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1212 01:23:35.592856  148785 main.go:141] libmachine: found compatible host: buildroot
	I1212 01:23:35.592872  148785 main.go:141] libmachine: Provisioning with buildroot...
	I1212 01:23:35.592882  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetMachineName
	I1212 01:23:35.593203  148785 buildroot.go:166] provisioning hostname "newest-cni-819544"
	I1212 01:23:35.593234  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetMachineName
	I1212 01:23:35.593468  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:35.596163  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.596517  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:35.596551  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.596673  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:35.596867  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:35.597050  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:35.597210  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:35.597400  148785 main.go:141] libmachine: Using SSH client type: native
	I1212 01:23:35.597600  148785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 01:23:35.597613  148785 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-819544 && echo "newest-cni-819544" | sudo tee /etc/hostname
	I1212 01:23:35.710972  148785 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-819544
	
	I1212 01:23:35.711012  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:35.714077  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.714541  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:35.714573  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.714743  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:35.714941  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:35.715117  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:35.715262  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:35.715445  148785 main.go:141] libmachine: Using SSH client type: native
	I1212 01:23:35.715653  148785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 01:23:35.715670  148785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-819544' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-819544/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-819544' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:23:35.825162  148785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:23:35.825199  148785 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:23:35.825242  148785 buildroot.go:174] setting up certificates
	I1212 01:23:35.825255  148785 provision.go:84] configureAuth start
	I1212 01:23:35.825268  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetMachineName
	I1212 01:23:35.825522  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetIP
	I1212 01:23:35.828256  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.828592  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:35.828634  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.828807  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:35.830955  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.831210  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:35.831238  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:35.831330  148785 provision.go:143] copyHostCerts
	I1212 01:23:35.831401  148785 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:23:35.831426  148785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:23:35.831510  148785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:23:35.831721  148785 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:23:35.831734  148785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:23:35.831776  148785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:23:35.831871  148785 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:23:35.831883  148785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:23:35.831917  148785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:23:35.831999  148785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.newest-cni-819544 san=[127.0.0.1 192.168.72.217 localhost minikube newest-cni-819544]
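	The provision step above signs a server certificate against the shared minikube CA with SANs for loopback, the machine IP, and the host names listed in the log. A compressed crypto/x509 sketch of the same idea; it generates a throwaway CA in-process instead of loading ca.pem/ca-key.pem, purely to show how the listed SANs end up in the certificate.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube instead loads ca.pem / ca-key.pem from .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-819544"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-819544"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.217")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}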
	I1212 01:23:36.091040  148785 provision.go:177] copyRemoteCerts
	I1212 01:23:36.091110  148785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:23:36.091136  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:36.094096  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.094422  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.094453  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.094688  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:36.094868  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:36.095014  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:36.095130  148785 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa Username:docker}
	I1212 01:23:36.178491  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:23:36.207325  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:23:36.234962  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:23:36.262608  148785 provision.go:87] duration metric: took 437.334759ms to configureAuth
	I1212 01:23:36.262639  148785 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:23:36.262821  148785 config.go:182] Loaded profile config "newest-cni-819544": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:23:36.262922  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:36.265564  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.265968  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.265997  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.266209  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:36.266375  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:36.266526  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:36.266698  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:36.266883  148785 main.go:141] libmachine: Using SSH client type: native
	I1212 01:23:36.267063  148785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 01:23:36.267079  148785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:23:36.490148  148785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:23:36.490178  148785 main.go:141] libmachine: Checking connection to Docker...
	I1212 01:23:36.490188  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetURL
	I1212 01:23:36.491421  148785 main.go:141] libmachine: (newest-cni-819544) DBG | Using libvirt version 6000000
	I1212 01:23:36.493914  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.494293  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.494327  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.494468  148785 main.go:141] libmachine: Docker is up and running!
	I1212 01:23:36.494485  148785 main.go:141] libmachine: Reticulating splines...
	I1212 01:23:36.494494  148785 client.go:171] duration metric: took 25.162620677s to LocalClient.Create
	I1212 01:23:36.494521  148785 start.go:167] duration metric: took 25.162701934s to libmachine.API.Create "newest-cni-819544"
	I1212 01:23:36.494535  148785 start.go:293] postStartSetup for "newest-cni-819544" (driver="kvm2")
	I1212 01:23:36.494549  148785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:23:36.494576  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:36.494793  148785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:23:36.494832  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:36.497307  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.497622  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.497661  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.497779  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:36.497956  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:36.498106  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:36.498216  148785 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa Username:docker}
	I1212 01:23:36.579403  148785 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:23:36.584403  148785 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:23:36.584437  148785 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:23:36.584531  148785 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:23:36.584644  148785 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:23:36.584773  148785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:23:36.595081  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:23:36.621197  148785 start.go:296] duration metric: took 126.646203ms for postStartSetup
	I1212 01:23:36.621290  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetConfigRaw
	I1212 01:23:36.621890  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetIP
	I1212 01:23:36.624640  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.625001  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.625029  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.625242  148785 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/config.json ...
	I1212 01:23:36.625468  148785 start.go:128] duration metric: took 25.31263535s to createHost
	I1212 01:23:36.625499  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:36.628095  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.628483  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.628510  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.628736  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:36.628965  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:36.629159  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:36.629337  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:36.629491  148785 main.go:141] libmachine: Using SSH client type: native
	I1212 01:23:36.629650  148785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.217 22 <nil> <nil>}
	I1212 01:23:36.629659  148785 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:23:36.736792  148785 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733966616.711284370
	
	I1212 01:23:36.736818  148785 fix.go:216] guest clock: 1733966616.711284370
	I1212 01:23:36.736828  148785 fix.go:229] Guest: 2024-12-12 01:23:36.71128437 +0000 UTC Remote: 2024-12-12 01:23:36.625483118 +0000 UTC m=+25.430447653 (delta=85.801252ms)
	I1212 01:23:36.736854  148785 fix.go:200] guest clock delta is within tolerance: 85.801252ms
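	The clock check above runs date +%s.%N in the guest and compares the result with the host's wall clock; the delta is only corrected when it falls outside a tolerance. A small sketch of that comparison using the exact values from the log (the 2s tolerance here is an assumed figure for illustration, not necessarily minikube's threshold).

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the "seconds.nanoseconds" output of `date +%s.%N` run in
// the guest and returns how far the guest clock is from the given local time.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	fields := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(fields[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(fields) == 2 {
		frac := (fields[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(local), nil
}

func main() {
	// Guest and host timestamps taken from the log above; prints ~85.801252ms.
	delta, err := clockDelta("1733966616.711284370", time.Unix(1733966616, 625483118))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold, illustration only
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}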
	I1212 01:23:36.736862  148785 start.go:83] releasing machines lock for "newest-cni-819544", held for 25.424107004s
	I1212 01:23:36.736891  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:36.737127  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetIP
	I1212 01:23:36.739830  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.740147  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.740176  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.740387  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:36.740861  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:36.741017  148785 main.go:141] libmachine: (newest-cni-819544) Calling .DriverName
	I1212 01:23:36.741085  148785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:23:36.741146  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:36.741217  148785 ssh_runner.go:195] Run: cat /version.json
	I1212 01:23:36.741245  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHHostname
	I1212 01:23:36.743681  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.743992  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.744025  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.744049  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.744209  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:36.744370  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:36.744409  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:36.744437  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:36.744505  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:36.744572  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHPort
	I1212 01:23:36.744654  148785 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa Username:docker}
	I1212 01:23:36.744996  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHKeyPath
	I1212 01:23:36.745125  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetSSHUsername
	I1212 01:23:36.745252  148785 sshutil.go:53] new ssh client: &{IP:192.168.72.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/newest-cni-819544/id_rsa Username:docker}
	I1212 01:23:36.850871  148785 ssh_runner.go:195] Run: systemctl --version
	I1212 01:23:36.857267  148785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:23:37.017380  148785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:23:37.024925  148785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:23:37.025002  148785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:23:37.041978  148785 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:23:37.042008  148785 start.go:495] detecting cgroup driver to use...
	I1212 01:23:37.042101  148785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:23:37.059863  148785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:23:37.074679  148785 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:23:37.074732  148785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:23:37.090216  148785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:23:37.104302  148785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:23:37.232661  148785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:23:37.371043  148785 docker.go:233] disabling docker service ...
	I1212 01:23:37.371108  148785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:23:37.386672  148785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:23:37.400718  148785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:23:37.544659  148785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:23:37.667001  148785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:23:37.681997  148785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:23:37.702224  148785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:23:37.702285  148785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:37.714302  148785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:23:37.714376  148785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:37.724908  148785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:37.735350  148785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:37.745868  148785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:23:37.756701  148785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:37.767631  148785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:23:37.785371  148785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
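	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, set conmon_cgroup, and open low ports via default_sysctls. A Go sketch of the first two substitutions only, as an illustration of the same in-place edit (against the real file it would need root; the path and values come from the commands logged above).

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Same effect as the first two sed substitutions logged above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}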
	I1212 01:23:37.795612  148785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:23:37.805426  148785 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:23:37.805474  148785 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:23:37.818656  148785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:23:37.830402  148785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:37.954182  148785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:23:38.061165  148785 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:23:38.061251  148785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:23:38.067532  148785 start.go:563] Will wait 60s for crictl version
	I1212 01:23:38.067623  148785 ssh_runner.go:195] Run: which crictl
	I1212 01:23:38.072273  148785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:23:38.111338  148785 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:23:38.111428  148785 ssh_runner.go:195] Run: crio --version
	I1212 01:23:38.140831  148785 ssh_runner.go:195] Run: crio --version
	I1212 01:23:38.172779  148785 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:23:38.174034  148785 main.go:141] libmachine: (newest-cni-819544) Calling .GetIP
	I1212 01:23:38.176698  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:38.177030  148785 main.go:141] libmachine: (newest-cni-819544) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:44:40", ip: ""} in network mk-newest-cni-819544: {Iface:virbr4 ExpiryTime:2024-12-12 02:23:27 +0000 UTC Type:0 Mac:52:54:00:0d:44:40 Iaid: IPaddr:192.168.72.217 Prefix:24 Hostname:newest-cni-819544 Clientid:01:52:54:00:0d:44:40}
	I1212 01:23:38.177064  148785 main.go:141] libmachine: (newest-cni-819544) DBG | domain newest-cni-819544 has defined IP address 192.168.72.217 and MAC address 52:54:00:0d:44:40 in network mk-newest-cni-819544
	I1212 01:23:38.177301  148785 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 01:23:38.181770  148785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:23:38.197897  148785 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1212 01:23:38.199154  148785 kubeadm.go:883] updating cluster {Name:newest-cni-819544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-819544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:23:38.199274  148785 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:23:38.199331  148785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:23:38.237801  148785 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:23:38.237865  148785 ssh_runner.go:195] Run: which lz4
	I1212 01:23:38.242148  148785 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:23:38.246410  148785 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:23:38.246432  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:23:39.696510  148785 crio.go:462] duration metric: took 1.454389158s to copy over tarball
	I1212 01:23:39.696600  148785 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:23:41.781690  148785 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.085054075s)
	I1212 01:23:41.781726  148785 crio.go:469] duration metric: took 2.085177691s to extract the tarball
	I1212 01:23:41.781736  148785 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:23:41.820515  148785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:23:41.870656  148785 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:23:41.870682  148785 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:23:41.870693  148785 kubeadm.go:934] updating node { 192.168.72.217 8443 v1.31.2 crio true true} ...
	I1212 01:23:41.870848  148785 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-819544 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:newest-cni-819544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:23:41.870936  148785 ssh_runner.go:195] Run: crio config
	I1212 01:23:41.929848  148785 cni.go:84] Creating CNI manager for ""
	I1212 01:23:41.929877  148785 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:23:41.929890  148785 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1212 01:23:41.929923  148785 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.217 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-819544 NodeName:newest-cni-819544 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.72.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:23:41.930082  148785 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-819544"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:23:41.930149  148785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:23:41.940587  148785 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:23:41.940670  148785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:23:41.949977  148785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1212 01:23:41.966726  148785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:23:41.986830  148785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2487 bytes)
	I1212 01:23:42.004322  148785 ssh_runner.go:195] Run: grep 192.168.72.217	control-plane.minikube.internal$ /etc/hosts
	I1212 01:23:42.008526  148785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:23:42.021503  148785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:23:42.148594  148785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:23:42.167840  148785 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544 for IP: 192.168.72.217
	I1212 01:23:42.167868  148785 certs.go:194] generating shared ca certs ...
	I1212 01:23:42.167884  148785 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:42.168051  148785 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:23:42.168109  148785 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:23:42.168122  148785 certs.go:256] generating profile certs ...
	I1212 01:23:42.168208  148785 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/client.key
	I1212 01:23:42.168230  148785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/client.crt with IP's: []
	I1212 01:23:42.351439  148785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/client.crt ...
	I1212 01:23:42.351472  148785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/client.crt: {Name:mk01f7322584a7b882e79f122049c65993dea6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:42.351700  148785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/client.key ...
	I1212 01:23:42.351717  148785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/client.key: {Name:mk233de052b52ea7a7db5e6f925c5fa9fda8dd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:42.351860  148785 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.key.7afe27c5
	I1212 01:23:42.351886  148785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.crt.7afe27c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.217]
	I1212 01:23:42.474477  148785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.crt.7afe27c5 ...
	I1212 01:23:42.474514  148785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.crt.7afe27c5: {Name:mkfaa7888dac692147ed9bb941306d08ae72baa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:42.474698  148785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.key.7afe27c5 ...
	I1212 01:23:42.474717  148785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.key.7afe27c5: {Name:mkdb8e2b35d968ed700aca64cabe610b0e7b088a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:42.474818  148785 certs.go:381] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.crt.7afe27c5 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.crt
	I1212 01:23:42.474925  148785 certs.go:385] copying /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.key.7afe27c5 -> /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.key
	I1212 01:23:42.475014  148785 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/proxy-client.key
	I1212 01:23:42.475037  148785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/proxy-client.crt with IP's: []
	I1212 01:23:42.615779  148785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/proxy-client.crt ...
	I1212 01:23:42.615814  148785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/proxy-client.crt: {Name:mk05a79b2882e9ceab85118ef01719d690f8eff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:42.615996  148785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/proxy-client.key ...
	I1212 01:23:42.616014  148785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/proxy-client.key: {Name:mk1621424090abf2739813ff0d49c0252360e218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:23:42.616212  148785 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:23:42.616263  148785 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:23:42.616279  148785 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:23:42.616315  148785 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:23:42.616348  148785 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:23:42.616384  148785 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:23:42.616439  148785 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:23:42.617116  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:23:42.644843  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:23:42.671589  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:23:42.697070  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:23:42.722507  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:23:42.747290  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:23:42.772845  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:23:42.797900  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/newest-cni-819544/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:23:42.823531  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:23:42.849064  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:23:42.874519  148785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:23:42.899621  148785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:23:42.917167  148785 ssh_runner.go:195] Run: openssl version
	I1212 01:23:42.923165  148785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:23:42.938544  148785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:23:42.955470  148785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:23:42.955542  148785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:23:42.969079  148785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:23:42.993167  148785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:23:43.006977  148785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:23:43.011658  148785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:23:43.011727  148785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:23:43.019054  148785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:23:43.036107  148785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:23:43.049323  148785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:43.053833  148785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:43.053896  148785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:23:43.059843  148785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:23:43.071379  148785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:23:43.076092  148785 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 01:23:43.076166  148785 kubeadm.go:392] StartCluster: {Name:newest-cni-819544 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:newest-cni-819544 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:23:43.076250  148785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:23:43.076289  148785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:23:43.113584  148785 cri.go:89] found id: ""
	I1212 01:23:43.113666  148785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:23:43.124917  148785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:23:43.140360  148785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:23:43.155804  148785 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:23:43.155828  148785 kubeadm.go:157] found existing configuration files:
	
	I1212 01:23:43.155885  148785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:23:43.167529  148785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:23:43.167628  148785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:23:43.178658  148785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:23:43.188785  148785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:23:43.188858  148785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:23:43.199747  148785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:23:43.210939  148785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:23:43.211004  148785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:23:43.222388  148785 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:23:43.233607  148785 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:23:43.233673  148785 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:23:43.245303  148785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:23:43.361371  148785 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:23:43.361442  148785 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:23:43.473626  148785 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:23:43.473766  148785 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:23:43.473907  148785 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:23:43.486529  148785 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:23:43.566169  148785 out.go:235]   - Generating certificates and keys ...
	I1212 01:23:43.566335  148785 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:23:43.566465  148785 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:23:43.603686  148785 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 01:23:43.794850  148785 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1212 01:23:44.036482  148785 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1212 01:23:44.275012  148785 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1212 01:23:44.428224  148785 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1212 01:23:44.428459  148785 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-819544] and IPs [192.168.72.217 127.0.0.1 ::1]
	I1212 01:23:44.821802  148785 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1212 01:23:44.822123  148785 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-819544] and IPs [192.168.72.217 127.0.0.1 ::1]
	I1212 01:23:44.886451  148785 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 01:23:45.062325  148785 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 01:23:45.285305  148785 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1212 01:23:45.285595  148785 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:23:45.522683  148785 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:23:45.637887  148785 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:23:46.033434  148785 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	
	
	==> CRI-O <==
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.621936579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58c291fc-53a0-45d3-8def-9f6cbb8cd75d name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.623347121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c931066c-63b3-4d85-a005-e0ae79b5c2b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.623713640Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966626623691180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c931066c-63b3-4d85-a005-e0ae79b5c2b8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.624247359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae9854a9-a221-4da7-b9c6-8c1d29420b9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.624339948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae9854a9-a221-4da7-b9c6-8c1d29420b9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.624547049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8,PodSandboxId:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965756316642871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca,PodSandboxId:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755432421630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61,PodSandboxId:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755308395051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39
249ae0-a54d-455d-a2ce-870c71fd2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c,PodSandboxId:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733965754528339261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17,PodSandboxId:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965743557492810,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6,PodSandboxId:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965743494303008,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61,PodSandboxId:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965743464966518,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6,PodSandboxId:982a319e60e773bc385fb86d7d5377bff886393b0192657a0f5c2abd87383ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965743431872392,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef,PodSandboxId:128809195d8a84b211ffc74302c9106482d1af585ec0aa274a2cb18f4dceee3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965461052531803,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae9854a9-a221-4da7-b9c6-8c1d29420b9f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.668566159Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1ee9dbc-e253-4a45-a997-6c0fdfc6da23 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.668658967Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1ee9dbc-e253-4a45-a997-6c0fdfc6da23 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.669951888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d74ff560-6a4b-4465-8f11-9a527771c7e1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.671855025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966626670433381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d74ff560-6a4b-4465-8f11-9a527771c7e1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.672682464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d635004-fc0e-4db9-92d9-09882ef1cd92 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.672761774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d635004-fc0e-4db9-92d9-09882ef1cd92 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.672962553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8,PodSandboxId:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965756316642871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca,PodSandboxId:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755432421630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61,PodSandboxId:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755308395051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39
249ae0-a54d-455d-a2ce-870c71fd2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c,PodSandboxId:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733965754528339261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17,PodSandboxId:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965743557492810,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6,PodSandboxId:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965743494303008,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61,PodSandboxId:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965743464966518,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6,PodSandboxId:982a319e60e773bc385fb86d7d5377bff886393b0192657a0f5c2abd87383ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965743431872392,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef,PodSandboxId:128809195d8a84b211ffc74302c9106482d1af585ec0aa274a2cb18f4dceee3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965461052531803,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d635004-fc0e-4db9-92d9-09882ef1cd92 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.676405483Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39ad5409-5b9a-4b20-a05e-f475a2a1615a name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.676753076Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:76e9f3eb-72ea-49a3-9711-6a5f98455322,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733965756196038961,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-12T01:09:15.886194316Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:24b3e1d3821b25d90479b4c01b6e2e5ae545cc71b3fa5ff866f221c03dd1fa1b,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-m2g6s,Uid:b0879479-4335-4782-b15a-83f22d85139e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733965756121835353,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-m2g6s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0879479-4335-4782-b15a-83f22d85139e
,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-12T01:09:15.814563455Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tflp9,Uid:edfd3f91-47ce-497c-ae3f-2c200e084be5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733965754406612722,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-12T01:09:14.093876786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kv2c6,Uid:39249ae0-a54d-455d-
a2ce-870c71fd2c03,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733965754385747361,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39249ae0-a54d-455d-a2ce-870c71fd2c03,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-12T01:09:14.067382601Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&PodSandboxMetadata{Name:kube-proxy-5kc2s,Uid:965f5b8a-25d3-40ed-89ee-9a4450864b73,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733965754298978445,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-12T01:09:13.972015669Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-242725,Uid:a69ee55bd7675d76a7c1425bb0aa449b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733965743279138857,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.222:8443,kubernetes.io/config.hash: a69ee55bd7675d76a7c1425bb0aa449b,kubernetes.io/config.seen: 2024-12-12T01:09:02.801258188Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:982a319e60e773bc385fb86d7d5377b
ff886393b0192657a0f5c2abd87383ff3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-242725,Uid:1d1e8279f2b34dcb46a85008c3372a4a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733965743274834995,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d1e8279f2b34dcb46a85008c3372a4a,kubernetes.io/config.seen: 2024-12-12T01:09:02.801259922Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-242725,Uid:a90786d8e7ea5d3c677a60c394359483,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733965743255965868,Labels:map[string]string{component: kube-sch
eduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a90786d8e7ea5d3c677a60c394359483,kubernetes.io/config.seen: 2024-12-12T01:09:02.801261349Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-242725,Uid:56f3285f1696251d232d3261ca96bbf8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733965743247230963,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.222:237
9,kubernetes.io/config.hash: 56f3285f1696251d232d3261ca96bbf8,kubernetes.io/config.seen: 2024-12-12T01:09:02.801251012Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=39ad5409-5b9a-4b20-a05e-f475a2a1615a name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.677595769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64921ff9-99f9-4892-afb3-c03c429b1992 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.677648136Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64921ff9-99f9-4892-afb3-c03c429b1992 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.677814718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8,PodSandboxId:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965756316642871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca,PodSandboxId:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755432421630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61,PodSandboxId:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755308395051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39
249ae0-a54d-455d-a2ce-870c71fd2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c,PodSandboxId:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733965754528339261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17,PodSandboxId:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965743557492810,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6,PodSandboxId:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965743494303008,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61,PodSandboxId:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965743464966518,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6,PodSandboxId:982a319e60e773bc385fb86d7d5377bff886393b0192657a0f5c2abd87383ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965743431872392,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64921ff9-99f9-4892-afb3-c03c429b1992 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.713281429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b8c791f4-a0a3-4221-80ae-3a22fd69d309 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.713354899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b8c791f4-a0a3-4221-80ae-3a22fd69d309 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.715021425Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8bb978b-e26f-4aa8-b040-a7f94c69f5f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.715425017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966626715399928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8bb978b-e26f-4aa8-b040-a7f94c69f5f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.715943793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8fc7b26-3dee-4af1-b6b5-e9d7823db35f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.715998158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8fc7b26-3dee-4af1-b6b5-e9d7823db35f name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:46 no-preload-242725 crio[713]: time="2024-12-12 01:23:46.716266632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8,PodSandboxId:7026f323931fe0ae5f16553f4e1bd4a0120b29c62b3d165dbc36743c00763383,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733965756316642871,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e9f3eb-72ea-49a3-9711-6a5f98455322,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca,PodSandboxId:66c6b1125b94145cb09476d3233e79ff450545f828d03678c8bf9c91bcd64c2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755432421630,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tflp9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edfd3f91-47ce-497c-ae3f-2c200e084be5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61,PodSandboxId:5042075143d40fb53d93f1788495ca17778874c20c02d7bb6edc63c6ede2fad2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733965755308395051,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-kv2c6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39
249ae0-a54d-455d-a2ce-870c71fd2c03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c,PodSandboxId:d725928490c8d81347296554d7382ee208a52de3aadbd90638bab937898640b4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733965754528339261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5kc2s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 965f5b8a-25d3-40ed-89ee-9a4450864b73,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17,PodSandboxId:a3676ecf3bb15da1db9567ed3c6824051fd4c539cfe8580f0b0e91e9d402ffb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733965743557492810,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f3285f1696251d232d3261ca96bbf8,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6,PodSandboxId:cd5a87dc431f1331e4e6dbce6bb3f9339505f285fbaaf01d10a6c72be204273f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733965743494303008,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a90786d8e7ea5d3c677a60c394359483,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61,PodSandboxId:a466fb0c255174e380354bed36582b559fd575001b17748f204019745d9f928e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733965743464966518,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6,PodSandboxId:982a319e60e773bc385fb86d7d5377bff886393b0192657a0f5c2abd87383ff3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733965743431872392,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1e8279f2b34dcb46a85008c3372a4a,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef,PodSandboxId:128809195d8a84b211ffc74302c9106482d1af585ec0aa274a2cb18f4dceee3a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733965461052531803,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-242725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a69ee55bd7675d76a7c1425bb0aa449b,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8fc7b26-3dee-4af1-b6b5-e9d7823db35f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff7827fa37f22       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   7026f323931fe       storage-provisioner
	60b32269243ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   66c6b1125b941       coredns-7c65d6cfc9-tflp9
	a1789431d93d4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   5042075143d40       coredns-7c65d6cfc9-kv2c6
	a94f000b28034       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   14 minutes ago      Running             kube-proxy                0                   d725928490c8d       kube-proxy-5kc2s
	fbdb347f38c02       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   a3676ecf3bb15       etcd-no-preload-242725
	11444b2efab69       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   14 minutes ago      Running             kube-scheduler            2                   cd5a87dc431f1       kube-scheduler-no-preload-242725
	dd1e0c805d800       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Running             kube-apiserver            2                   a466fb0c25517       kube-apiserver-no-preload-242725
	ccee5585bfc48       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   14 minutes ago      Running             kube-controller-manager   2                   982a319e60e77       kube-controller-manager-no-preload-242725
	3d7c8b9818dc2       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   19 minutes ago      Exited              kube-apiserver            1                   128809195d8a8       kube-apiserver-no-preload-242725
	
	
	==> coredns [60b32269243bac8cf558abe63266e3ea2c125cd79615e6be542cedbf8ef459ca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [a1789431d93d4adc654763c152d13e22aef046f42a9c13ed5438e1db74128c61] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-242725
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-242725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458
	                    minikube.k8s.io/name=no-preload-242725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_12T01_09_09_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Dec 2024 01:09:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-242725
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Dec 2024 01:23:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Dec 2024 01:19:29 +0000   Thu, 12 Dec 2024 01:09:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Dec 2024 01:19:29 +0000   Thu, 12 Dec 2024 01:09:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Dec 2024 01:19:29 +0000   Thu, 12 Dec 2024 01:09:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Dec 2024 01:19:29 +0000   Thu, 12 Dec 2024 01:09:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.222
	  Hostname:    no-preload-242725
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d23c4d5b575b461683e971eeb726b8b7
	  System UUID:                d23c4d5b-575b-4616-83e9-71eeb726b8b7
	  Boot ID:                    65fa1cdf-a3ab-41b8-8a92-f83d8d596f20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-kv2c6                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-tflp9                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-242725                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-242725             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-242725    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5kc2s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-242725             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-m2g6s              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-242725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-242725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-242725 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-242725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-242725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-242725 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-242725 event: Registered Node no-preload-242725 in Controller
	
	
	==> dmesg <==
	[  +0.046231] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.228118] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.003258] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.735585] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec12 01:04] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.060708] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055781] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.204319] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.120468] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.313452] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[ +16.117643] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.061381] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.182738] systemd-fstab-generator[1428]: Ignoring "noauto" option for root device
	[  +6.226683] kauditd_printk_skb: 100 callbacks suppressed
	[  +5.699152] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.593800] kauditd_printk_skb: 23 callbacks suppressed
	[Dec12 01:09] systemd-fstab-generator[3124]: Ignoring "noauto" option for root device
	[  +0.060978] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.029473] systemd-fstab-generator[3454]: Ignoring "noauto" option for root device
	[  +0.088838] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.796874] systemd-fstab-generator[3575]: Ignoring "noauto" option for root device
	[  +0.101045] kauditd_printk_skb: 12 callbacks suppressed
	[Dec12 01:10] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [fbdb347f38c028a5eba7a978c136524b24c4142e09d8f1fbcbe7adf6d05f6c17] <==
	{"level":"info","ts":"2024-12-12T01:09:04.023221Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-12T01:09:04.023147Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b0f93967598a482b","initial-advertise-peer-urls":["https://192.168.61.222:2380"],"listen-peer-urls":["https://192.168.61.222:2380"],"advertise-client-urls":["https://192.168.61.222:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.222:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-12T01:09:04.636258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-12T01:09:04.636368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-12T01:09:04.636476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b received MsgPreVoteResp from b0f93967598a482b at term 1"}
	{"level":"info","ts":"2024-12-12T01:09:04.636573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b became candidate at term 2"}
	{"level":"info","ts":"2024-12-12T01:09:04.636600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b received MsgVoteResp from b0f93967598a482b at term 2"}
	{"level":"info","ts":"2024-12-12T01:09:04.636611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0f93967598a482b became leader at term 2"}
	{"level":"info","ts":"2024-12-12T01:09:04.636700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0f93967598a482b elected leader b0f93967598a482b at term 2"}
	{"level":"info","ts":"2024-12-12T01:09:04.641410Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:09:04.642097Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b0f93967598a482b","local-member-attributes":"{Name:no-preload-242725 ClientURLs:[https://192.168.61.222:2379]}","request-path":"/0/members/b0f93967598a482b/attributes","cluster-id":"d5cdccca781de8ae","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-12T01:09:04.642144Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:09:04.642699Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d5cdccca781de8ae","local-member-id":"b0f93967598a482b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:09:04.642843Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:09:04.642937Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-12T01:09:04.643007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-12T01:09:04.652614Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:09:04.653441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.222:2379"}
	{"level":"info","ts":"2024-12-12T01:09:04.655287Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-12T01:09:04.655372Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-12T01:09:04.658926Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-12T01:09:04.658887Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-12T01:19:04.723539Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":726}
	{"level":"info","ts":"2024-12-12T01:19:04.733843Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":726,"took":"9.529466ms","hash":853641892,"current-db-size-bytes":2363392,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2363392,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-12-12T01:19:04.733942Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":853641892,"revision":726,"compact-revision":-1}
	
	
	==> kernel <==
	 01:23:47 up 20 min,  0 users,  load average: 0.00, 0.05, 0.10
	Linux no-preload-242725 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3d7c8b9818dc2327eb14ac92d8921ff086eb91795316e7ade296bba52d7d52ef] <==
	W1212 01:08:59.782931       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.824029       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.848773       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.851410       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.871806       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:08:59.933865       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.067223       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.077818       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.115895       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.225945       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.231351       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.320335       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.338336       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.342854       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.360367       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.441033       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.529878       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.571507       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.604291       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.622786       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.657449       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.657642       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.727181       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.755589       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 01:09:00.829326       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [dd1e0c805d8006f005afae92a179b98a7d4eff2d50ed61181814c73098fa4a61] <==
	E1212 01:19:07.201528       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1212 01:19:07.201643       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:19:07.202787       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:19:07.202832       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:20:07.203728       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:20:07.204013       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1212 01:20:07.204043       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:20:07.204140       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1212 01:20:07.207862       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:20:07.207904       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1212 01:22:07.209147       1 handler_proxy.go:99] no RequestInfo found in the context
	W1212 01:22:07.209265       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 01:22:07.209602       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1212 01:22:07.209606       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1212 01:22:07.210896       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1212 01:22:07.210954       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ccee5585bfc4831a206e7456bf02037c800f9e8034eba9a091c603be45fd12d6] <==
	E1212 01:18:43.188673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:18:43.737838       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:19:13.196388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:19:13.746559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:19:29.024548       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-242725"
	E1212 01:19:43.202741       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:19:43.754198       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:20:13.209155       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:20:13.761902       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1212 01:20:17.694554       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="144.43µs"
	I1212 01:20:29.690955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="109.128µs"
	E1212 01:20:43.217896       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:20:43.771028       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:21:13.224995       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:21:13.779634       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:21:43.231928       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:21:43.789380       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:22:13.238474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:22:13.798752       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:22:43.245396       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:22:43.806869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:23:13.252618       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:23:13.815270       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1212 01:23:43.261133       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 01:23:43.825430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a94f000b280346f99b52e354af9b09cbe544c7910209856d6e6f14e02a251e5c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1212 01:09:15.212558       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1212 01:09:15.229844       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.222"]
	E1212 01:09:15.229960       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 01:09:15.409282       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1212 01:09:15.409340       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1212 01:09:15.409388       1 server_linux.go:169] "Using iptables Proxier"
	I1212 01:09:15.417948       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 01:09:15.418280       1 server.go:483] "Version info" version="v1.31.2"
	I1212 01:09:15.418293       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 01:09:15.424716       1 config.go:199] "Starting service config controller"
	I1212 01:09:15.424753       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1212 01:09:15.424775       1 config.go:105] "Starting endpoint slice config controller"
	I1212 01:09:15.424779       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1212 01:09:15.424812       1 config.go:328] "Starting node config controller"
	I1212 01:09:15.424819       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1212 01:09:15.526205       1 shared_informer.go:320] Caches are synced for node config
	I1212 01:09:15.526222       1 shared_informer.go:320] Caches are synced for service config
	I1212 01:09:15.526243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11444b2efab6903c3ca392aba14ef1fc4b899d047509af63da8254d79a96eef6] <==
	W1212 01:09:06.225418       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1212 01:09:06.225487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:06.226129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:06.225534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:06.226188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:06.225589       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1212 01:09:06.226237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1212 01:09:06.226280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.162979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1212 01:09:07.163038       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.203344       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:07.203458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.229335       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:07.229393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.293461       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1212 01:09:07.294606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.388638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:07.389006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.398793       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1212 01:09:07.398857       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.429549       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1212 01:09:07.429622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1212 01:09:07.474912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1212 01:09:07.474955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1212 01:09:07.819606       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
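The kube-scheduler warnings above are RBAC denials hit while its informers were starting ("User \"system:kube-scheduler\" cannot list resource ... at the cluster scope"); the section ends once the client-ca informer cache syncs. One hedged way to check such a permission from Go is a SelfSubjectAccessReview, the API behind kubectl auth can-i; the sketch below is an illustration only and asks whether the caller may list persistentvolumes:

    package main

    import (
        "context"
        "fmt"

        authorizationv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Roughly: kubectl auth can-i list persistentvolumes
        review := &authorizationv1.SelfSubjectAccessReview{
            Spec: authorizationv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authorizationv1.ResourceAttributes{
                    Verb:     "list",
                    Resource: "persistentvolumes",
                },
            },
        }
        resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(
            context.TODO(), review, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v denied=%v reason=%q\n",
            resp.Status.Allowed, resp.Status.Denied, resp.Status.Reason)
    }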
	
	
	==> kubelet <==
	Dec 12 01:22:37 no-preload-242725 kubelet[3461]: E1212 01:22:37.677051    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:22:38 no-preload-242725 kubelet[3461]: E1212 01:22:38.913599    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966558913015683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:22:38 no-preload-242725 kubelet[3461]: E1212 01:22:38.913631    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966558913015683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:22:48 no-preload-242725 kubelet[3461]: E1212 01:22:48.920688    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966568918147773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:22:48 no-preload-242725 kubelet[3461]: E1212 01:22:48.920768    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966568918147773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:22:52 no-preload-242725 kubelet[3461]: E1212 01:22:52.678251    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:22:58 no-preload-242725 kubelet[3461]: E1212 01:22:58.923442    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966578922935665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:22:58 no-preload-242725 kubelet[3461]: E1212 01:22:58.923495    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966578922935665,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:06 no-preload-242725 kubelet[3461]: E1212 01:23:06.680045    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:23:08 no-preload-242725 kubelet[3461]: E1212 01:23:08.745143    3461 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 12 01:23:08 no-preload-242725 kubelet[3461]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 12 01:23:08 no-preload-242725 kubelet[3461]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 12 01:23:08 no-preload-242725 kubelet[3461]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 12 01:23:08 no-preload-242725 kubelet[3461]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 12 01:23:08 no-preload-242725 kubelet[3461]: E1212 01:23:08.924969    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966588924515355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:08 no-preload-242725 kubelet[3461]: E1212 01:23:08.925000    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966588924515355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:18 no-preload-242725 kubelet[3461]: E1212 01:23:18.927674    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966598926905105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:18 no-preload-242725 kubelet[3461]: E1212 01:23:18.928020    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966598926905105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:21 no-preload-242725 kubelet[3461]: E1212 01:23:21.676659    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:23:28 no-preload-242725 kubelet[3461]: E1212 01:23:28.929843    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966608929558126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:28 no-preload-242725 kubelet[3461]: E1212 01:23:28.930201    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966608929558126,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:33 no-preload-242725 kubelet[3461]: E1212 01:23:33.676966    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
	Dec 12 01:23:38 no-preload-242725 kubelet[3461]: E1212 01:23:38.931926    3461 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966618931606291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:38 no-preload-242725 kubelet[3461]: E1212 01:23:38.931997    3461 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966618931606291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 12 01:23:45 no-preload-242725 kubelet[3461]: E1212 01:23:45.676736    3461 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-m2g6s" podUID="b0879479-4335-4782-b15a-83f22d85139e"
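The kubelet lines above alternate between eviction-manager stat errors and an ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 on metrics-server-6867b74b74-m2g6s. That pull failure is also visible from the API in the pod's container statuses; a small illustrative client-go loop (the kubeconfig path and the kube-system namespace are assumptions, and this is not part of the test suite):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the default kubeconfig location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            for _, cs := range pod.Status.ContainerStatuses {
                // "ImagePullBackOff" / "ErrImagePull" show up as the container's waiting reason.
                if w := cs.State.Waiting; w != nil && w.Reason == "ImagePullBackOff" {
                    fmt.Printf("%s/%s container %s: %s (%s)\n",
                        pod.Namespace, pod.Name, cs.Name, w.Reason, w.Message)
                }
            }
        }
    }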
	
	
	==> storage-provisioner [ff7827fa37f2279a1322bd4ab221adf46c56252442de322b5c84d24b994cfcc8] <==
	I1212 01:09:16.436510       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1212 01:09:16.456880       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1212 01:09:16.456993       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1212 01:09:16.465438       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1212 01:09:16.465626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-242725_d7e1b762-b572-4dd7-a67e-47acc0186cfc!
	I1212 01:09:16.467244       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"58129ccb-db34-4d41-b6ab-c80c5b3f104f", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-242725_d7e1b762-b572-4dd7-a67e-47acc0186cfc became leader
	I1212 01:09:16.566693       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-242725_d7e1b762-b572-4dd7-a67e-47acc0186cfc!
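The storage-provisioner log above shows the usual pattern of acquiring a leader lease (kube-system/k8s.io-minikube-hostpath, here via an Endpoints lock) before the provisioner controller starts. A minimal client-go leader-election sketch of the same idea, using a Lease lock and placeholder names purely for illustration rather than the provisioner's actual code:

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        id, _ := os.Hostname()
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "example-provisioner"},
            Client:     clientset.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    log.Println("acquired lease; starting the controller")
                },
                OnStoppedLeading: func() {
                    log.Println("lost the lease; shutting down")
                },
            },
        })
    }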
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-242725 -n no-preload-242725
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-242725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-m2g6s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-242725 describe pod metrics-server-6867b74b74-m2g6s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-242725 describe pod metrics-server-6867b74b74-m2g6s: exit status 1 (97.170696ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-m2g6s" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-242725 describe pod metrics-server-6867b74b74-m2g6s: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (324.16s)
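For context on the post-mortem above: it first lists non-Running pods with kubectl's --field-selector=status.phase!=Running and then fails to describe metrics-server-6867b74b74-m2g6s because the pod is already gone (NotFound). The same listing step, sketched with client-go purely for illustration (this is not the helpers_test.go implementation):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Same filter as: kubectl get po -A --field-selector=status.phase!=Running
        pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            fmt.Printf("%s/%s phase=%s\n", pod.Namespace, pod.Name, pod.Status.Phase)
        }
    }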

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (131.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
[... the same WARNING repeats verbatim on every subsequent poll while the apiserver at 192.168.72.25:8443 keeps refusing connections ...]
E1212 01:22:46.617878   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
E1212 01:22:55.698320   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.25:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.25:8443: connect: connection refused
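The helpers_test.go:329 warnings above are the harness repeatedly listing dashboard pods while the apiserver at 192.168.72.25:8443 refuses connections. A roughly equivalent manual check (a sketch that reuses the context, namespace, and label selector from these warnings; it is not the harness's own helper):

    kubectl --context old-k8s-version-738445 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context old-k8s-version-738445 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=90s

Both commands keep returning the same "connection refused" until the apiserver is reachable again.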
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 2 (246.820106ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-738445" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-738445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-738445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.897µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-738445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
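Because status --format={{.APIServer}} reports Stopped, the follow-up kubectl steps cannot complete before their deadline, which is why the addon image check above sees empty deployment info. A minimal manual triage sequence, reusing the exact profile, context, and arguments from the failing steps above (a sketch for reproducing the checks by hand, not part of the test):

    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445
    out/minikube-linux-amd64 -p old-k8s-version-738445 logs -n 25
    kubectl --context old-k8s-version-738445 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

The describe output only becomes meaningful once the first command reports Running.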
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 2 (238.888613ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-738445 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-738445 logs -n 25: (1.537642043s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-000053 -- sudo                         | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-000053                                 | cert-options-000053          | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:53 UTC |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:53 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-459384                           | kubernetes-upgrade-459384    | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:54 UTC |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:54 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-242725             | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-112531                              | cert-expiration-112531       | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	| addons  | enable metrics-server -p embed-certs-607268            | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | disable-driver-mounts-535684 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:55 UTC |
	|         | disable-driver-mounts-535684                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC | 12 Dec 24 00:56 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:55 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-076578  | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC | 12 Dec 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:56 UTC |                     |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-242725                  | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-607268                 | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-738445        | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:57 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-242725                                   | no-preload-242725            | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-607268                                  | embed-certs-607268           | jenkins | v1.34.0 | 12 Dec 24 00:58 UTC | 12 Dec 24 01:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-076578       | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-076578 | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 01:08 UTC |
	|         | default-k8s-diff-port-076578                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-738445             | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC | 12 Dec 24 00:59 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-738445                              | old-k8s-version-738445       | jenkins | v1.34.0 | 12 Dec 24 00:59 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/12 00:59:45
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 00:59:45.233578  142150 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:59:45.233778  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.233807  142150 out.go:358] Setting ErrFile to fd 2...
	I1212 00:59:45.233824  142150 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:59:45.234389  142150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:59:45.235053  142150 out.go:352] Setting JSON to false
	I1212 00:59:45.235948  142150 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13327,"bootTime":1733951858,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:59:45.236050  142150 start.go:139] virtualization: kvm guest
	I1212 00:59:45.238284  142150 out.go:177] * [old-k8s-version-738445] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:59:45.239634  142150 notify.go:220] Checking for updates...
	I1212 00:59:45.239643  142150 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:59:45.240927  142150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:59:45.242159  142150 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:59:45.243348  142150 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:59:45.244426  142150 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:59:45.245620  142150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:59:45.247054  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:59:45.247412  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.247475  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.262410  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35735
	I1212 00:59:45.262838  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.263420  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.263444  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.263773  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.263944  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.265490  142150 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1212 00:59:45.266656  142150 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:59:45.266925  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:59:45.266959  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:59:45.281207  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I1212 00:59:45.281596  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:59:45.281963  142150 main.go:141] libmachine: Using API Version  1
	I1212 00:59:45.281991  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:59:45.282333  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:59:45.282519  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 00:59:45.316543  142150 out.go:177] * Using the kvm2 driver based on existing profile
	I1212 00:59:45.317740  142150 start.go:297] selected driver: kvm2
	I1212 00:59:45.317754  142150 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.317960  142150 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:59:45.318921  142150 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.319030  142150 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1212 00:59:45.334276  142150 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1212 00:59:45.334744  142150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 00:59:45.334784  142150 cni.go:84] Creating CNI manager for ""
	I1212 00:59:45.334845  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 00:59:45.334901  142150 start.go:340] cluster config:
	{Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 00:59:45.335060  142150 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 00:59:45.336873  142150 out.go:177] * Starting "old-k8s-version-738445" primary control-plane node in "old-k8s-version-738445" cluster
	I1212 00:59:42.763823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:45.338030  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 00:59:45.338076  142150 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1212 00:59:45.338087  142150 cache.go:56] Caching tarball of preloaded images
	I1212 00:59:45.338174  142150 preload.go:172] Found /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1212 00:59:45.338188  142150 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1212 00:59:45.338309  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 00:59:45.338520  142150 start.go:360] acquireMachinesLock for old-k8s-version-738445: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 00:59:48.839858  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:51.911930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 00:59:57.991816  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:01.063931  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:07.143823  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:10.215896  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:16.295837  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:19.367812  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:25.447920  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:28.519965  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:34.599875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:37.671930  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:43.751927  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:46.823861  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:52.903942  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:00:55.975967  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:02.055889  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:05.127830  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:11.207862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:14.279940  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:20.359871  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:23.431885  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:29.511831  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:32.583875  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:38.663880  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:41.735869  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:47.815810  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:50.887937  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:01:56.967886  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:00.039916  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:06.119870  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:09.191917  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:15.271841  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:18.343881  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:24.423844  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:27.495936  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:33.575851  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:36.647862  141411 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.222:22: connect: no route to host
	I1212 01:02:39.652816  141469 start.go:364] duration metric: took 4m35.142362604s to acquireMachinesLock for "embed-certs-607268"
	I1212 01:02:39.652891  141469 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:39.652902  141469 fix.go:54] fixHost starting: 
	I1212 01:02:39.653292  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:39.653345  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:39.668953  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1212 01:02:39.669389  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:39.669880  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:02:39.669906  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:39.670267  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:39.670428  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:39.670550  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:02:39.671952  141469 fix.go:112] recreateIfNeeded on embed-certs-607268: state=Stopped err=<nil>
	I1212 01:02:39.671994  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	W1212 01:02:39.672154  141469 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:39.677119  141469 out.go:177] * Restarting existing kvm2 VM for "embed-certs-607268" ...
	I1212 01:02:39.650358  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:39.650413  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650700  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:02:39.650731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:02:39.650949  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:02:39.652672  141411 machine.go:96] duration metric: took 4m37.426998938s to provisionDockerMachine
	I1212 01:02:39.652723  141411 fix.go:56] duration metric: took 4m37.447585389s for fixHost
	I1212 01:02:39.652731  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 4m37.447868317s
	W1212 01:02:39.652756  141411 start.go:714] error starting host: provision: host is not running
	W1212 01:02:39.652909  141411 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1212 01:02:39.652919  141411 start.go:729] Will try again in 5 seconds ...
	I1212 01:02:39.682230  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Start
	I1212 01:02:39.682424  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring networks are active...
	I1212 01:02:39.683293  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network default is active
	I1212 01:02:39.683713  141469 main.go:141] libmachine: (embed-certs-607268) Ensuring network mk-embed-certs-607268 is active
	I1212 01:02:39.684046  141469 main.go:141] libmachine: (embed-certs-607268) Getting domain xml...
	I1212 01:02:39.684631  141469 main.go:141] libmachine: (embed-certs-607268) Creating domain...
	I1212 01:02:40.886712  141469 main.go:141] libmachine: (embed-certs-607268) Waiting to get IP...
	I1212 01:02:40.887814  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:40.888208  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:40.888304  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:40.888203  142772 retry.go:31] will retry after 273.835058ms: waiting for machine to come up
	I1212 01:02:41.164102  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.164574  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.164604  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.164545  142772 retry.go:31] will retry after 260.789248ms: waiting for machine to come up
	I1212 01:02:41.427069  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.427486  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.427513  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.427449  142772 retry.go:31] will retry after 330.511025ms: waiting for machine to come up
	I1212 01:02:41.760038  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:41.760388  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:41.760434  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:41.760337  142772 retry.go:31] will retry after 564.656792ms: waiting for machine to come up
	I1212 01:02:42.327037  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.327545  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.327567  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.327505  142772 retry.go:31] will retry after 473.714754ms: waiting for machine to come up
	I1212 01:02:42.803228  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:42.803607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:42.803639  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:42.803548  142772 retry.go:31] will retry after 872.405168ms: waiting for machine to come up
	I1212 01:02:43.677522  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:43.677891  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:43.677919  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:43.677833  142772 retry.go:31] will retry after 1.092518369s: waiting for machine to come up
	I1212 01:02:44.655390  141411 start.go:360] acquireMachinesLock for no-preload-242725: {Name:mk1fa60d266c903bd18858ee13e25d7a9c937e78 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1212 01:02:44.771319  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:44.771721  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:44.771751  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:44.771666  142772 retry.go:31] will retry after 1.147907674s: waiting for machine to come up
	I1212 01:02:45.921165  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:45.921636  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:45.921666  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:45.921589  142772 retry.go:31] will retry after 1.69009103s: waiting for machine to come up
	I1212 01:02:47.614391  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:47.614838  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:47.614863  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:47.614792  142772 retry.go:31] will retry after 1.65610672s: waiting for machine to come up
	I1212 01:02:49.272864  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:49.273312  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:49.273337  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:49.273268  142772 retry.go:31] will retry after 2.50327667s: waiting for machine to come up
	I1212 01:02:51.779671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:51.780077  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:51.780104  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:51.780016  142772 retry.go:31] will retry after 2.808303717s: waiting for machine to come up
	I1212 01:02:54.591866  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:54.592241  141469 main.go:141] libmachine: (embed-certs-607268) DBG | unable to find current IP address of domain embed-certs-607268 in network mk-embed-certs-607268
	I1212 01:02:54.592285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | I1212 01:02:54.592208  142772 retry.go:31] will retry after 3.689107313s: waiting for machine to come up
	I1212 01:02:58.282552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.282980  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has current primary IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.283005  141469 main.go:141] libmachine: (embed-certs-607268) Found IP for machine: 192.168.50.151
	I1212 01:02:58.283018  141469 main.go:141] libmachine: (embed-certs-607268) Reserving static IP address...
	I1212 01:02:58.283419  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.283441  141469 main.go:141] libmachine: (embed-certs-607268) Reserved static IP address: 192.168.50.151
	I1212 01:02:58.283453  141469 main.go:141] libmachine: (embed-certs-607268) DBG | skip adding static IP to network mk-embed-certs-607268 - found existing host DHCP lease matching {name: "embed-certs-607268", mac: "52:54:00:64:f0:cf", ip: "192.168.50.151"}
	I1212 01:02:58.283462  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Getting to WaitForSSH function...
	I1212 01:02:58.283469  141469 main.go:141] libmachine: (embed-certs-607268) Waiting for SSH to be available...
	I1212 01:02:58.285792  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286126  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.286162  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.286298  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH client type: external
	I1212 01:02:58.286330  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa (-rw-------)
	I1212 01:02:58.286378  141469 main.go:141] libmachine: (embed-certs-607268) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.151 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:02:58.286394  141469 main.go:141] libmachine: (embed-certs-607268) DBG | About to run SSH command:
	I1212 01:02:58.286403  141469 main.go:141] libmachine: (embed-certs-607268) DBG | exit 0
	I1212 01:02:58.407633  141469 main.go:141] libmachine: (embed-certs-607268) DBG | SSH cmd err, output: <nil>: 
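The "About to run SSH command: exit 0" step is a reachability probe: libmachine keeps running a trivial command over SSH until it succeeds. A hedged sketch of the same probe done by hand, reusing the options from the logged external ssh command; the surrounding retry loop and its 2s interval are assumptions for illustration.

    # Sketch of the SSH readiness probe above; options mirror the logged command.
    until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
              -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
              -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa \
              -p 22 docker@192.168.50.151 'exit 0' 2>/dev/null; do
      sleep 2
    done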
	I1212 01:02:58.407985  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetConfigRaw
	I1212 01:02:58.408745  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.411287  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411607  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.411642  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.411920  141469 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/config.json ...
	I1212 01:02:58.412117  141469 machine.go:93] provisionDockerMachine start ...
	I1212 01:02:58.412136  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:58.412336  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.414338  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414643  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.414669  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.414765  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.414944  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415114  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.415259  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.415450  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.415712  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.415724  141469 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:02:58.520032  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:02:58.520068  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520312  141469 buildroot.go:166] provisioning hostname "embed-certs-607268"
	I1212 01:02:58.520341  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.520539  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.523169  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523552  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.523584  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.523733  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.523910  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524092  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.524252  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.524411  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.524573  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.524584  141469 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-607268 && echo "embed-certs-607268" | sudo tee /etc/hostname
	I1212 01:02:58.642175  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-607268
	
	I1212 01:02:58.642232  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.645114  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645480  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.645505  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.645698  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.645909  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646063  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.646192  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.646334  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:58.646513  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:58.646530  141469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-607268' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-607268/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-607268' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:02:58.758655  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:02:58.758696  141469 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:02:58.758715  141469 buildroot.go:174] setting up certificates
	I1212 01:02:58.758726  141469 provision.go:84] configureAuth start
	I1212 01:02:58.758735  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetMachineName
	I1212 01:02:58.759031  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:58.761749  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762024  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.762052  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.762165  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.764356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764671  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.764699  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.764781  141469 provision.go:143] copyHostCerts
	I1212 01:02:58.764874  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:02:58.764898  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:02:58.764986  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:02:58.765107  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:02:58.765118  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:02:58.765160  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:02:58.765235  141469 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:02:58.765245  141469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:02:58.765296  141469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:02:58.765369  141469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.embed-certs-607268 san=[127.0.0.1 192.168.50.151 embed-certs-607268 localhost minikube]
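The "generating server cert" line shows the CA material used and the SAN set (127.0.0.1, 192.168.50.151, embed-certs-607268, localhost, minikube). minikube generates this certificate in Go; the openssl commands below are only a hedged, illustrative equivalent, with filenames and the validity period assumed.

    # Illustrative openssl equivalent of the server-cert generation above.
    # minikube does this in Go; filenames and -days 365 are assumptions.
    printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.151,DNS:embed-certs-607268,DNS:localhost,DNS:minikube\n' > san.cnf
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.embed-certs-607268" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -extfile san.cnf -out server.pem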
	I1212 01:02:58.890412  141469 provision.go:177] copyRemoteCerts
	I1212 01:02:58.890519  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:02:58.890560  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:58.892973  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893262  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:58.893291  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:58.893471  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:58.893647  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:58.893761  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:58.893855  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:58.973652  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:02:58.998097  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:02:59.022028  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:02:59.045859  141469 provision.go:87] duration metric: took 287.094036ms to configureAuth
	I1212 01:02:59.045892  141469 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:02:59.046119  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:02:59.046242  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.048869  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049255  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.049285  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.049465  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.049642  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049764  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.049864  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.049974  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.050181  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.050198  141469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:02:59.276670  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:02:59.276708  141469 machine.go:96] duration metric: took 864.577145ms to provisionDockerMachine
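The tee command a few lines above writes CRIO_MINIKUBE_OPTIONS (an --insecure-registry flag covering the 10.96.0.0/12 service CIDR) to /etc/sysconfig/crio.minikube and restarts CRI-O. On the minikube guest that file is presumably consumed through an EnvironmentFile entry in the crio unit; the quick check below is illustrative, and the EnvironmentFile wiring is an assumption rather than something shown in this log.

    # Illustrative check on the guest: confirm the drop-in and how crio consumes it.
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i -A1 environmentfile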
	I1212 01:02:59.276724  141469 start.go:293] postStartSetup for "embed-certs-607268" (driver="kvm2")
	I1212 01:02:59.276747  141469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:02:59.276780  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.277171  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:02:59.277207  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.279974  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280341  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.280387  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.280529  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.280738  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.280897  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.281026  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.363091  141469 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:02:59.367476  141469 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:02:59.367503  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:02:59.367618  141469 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:02:59.367749  141469 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:02:59.367844  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:02:59.377895  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:02:59.402410  141469 start.go:296] duration metric: took 125.668908ms for postStartSetup
	I1212 01:02:59.402462  141469 fix.go:56] duration metric: took 19.749561015s for fixHost
	I1212 01:02:59.402485  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.405057  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405356  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.405385  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.405624  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.405808  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.405974  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.406094  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.406237  141469 main.go:141] libmachine: Using SSH client type: native
	I1212 01:02:59.406423  141469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.50.151 22 <nil> <nil>}
	I1212 01:02:59.406436  141469 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:02:59.516697  141884 start.go:364] duration metric: took 3m42.810720852s to acquireMachinesLock for "default-k8s-diff-port-076578"
	I1212 01:02:59.516759  141884 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:02:59.516773  141884 fix.go:54] fixHost starting: 
	I1212 01:02:59.517192  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:02:59.517241  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:02:59.533969  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I1212 01:02:59.534367  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:02:59.534831  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:02:59.534854  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:02:59.535165  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:02:59.535362  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:02:59.535499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:02:59.536930  141884 fix.go:112] recreateIfNeeded on default-k8s-diff-port-076578: state=Stopped err=<nil>
	I1212 01:02:59.536951  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	W1212 01:02:59.537093  141884 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:02:59.538974  141884 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-076578" ...
	I1212 01:02:59.516496  141469 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965379.489556963
	
	I1212 01:02:59.516525  141469 fix.go:216] guest clock: 1733965379.489556963
	I1212 01:02:59.516535  141469 fix.go:229] Guest: 2024-12-12 01:02:59.489556963 +0000 UTC Remote: 2024-12-12 01:02:59.40246635 +0000 UTC m=+295.033602018 (delta=87.090613ms)
	I1212 01:02:59.516574  141469 fix.go:200] guest clock delta is within tolerance: 87.090613ms
	I1212 01:02:59.516580  141469 start.go:83] releasing machines lock for "embed-certs-607268", held for 19.863720249s
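The `date +%s.%N` run above is a clock-skew check: the guest timestamp (1733965379.489556963) is compared against the host clock and accepted because the ~87ms delta is within tolerance. A rough manual rendition is sketched below; minikube applies its own tolerance, and the comparison step itself is left out here.

    # Rough manual version of the guest/host clock comparison above.
    guest=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
              -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa \
              docker@192.168.50.151 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest/host clock delta: $(echo "$host - $guest" | bc) s"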
	I1212 01:02:59.516605  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.516828  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:02:59.519731  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520075  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.520111  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.520202  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520752  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.520921  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:02:59.521064  141469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:02:59.521131  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.521155  141469 ssh_runner.go:195] Run: cat /version.json
	I1212 01:02:59.521171  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:02:59.523724  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.523971  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524036  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524063  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524221  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524374  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:02:59.524375  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524401  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:02:59.524553  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.524562  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:02:59.524719  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:02:59.524719  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.524837  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:02:59.525000  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:02:59.628168  141469 ssh_runner.go:195] Run: systemctl --version
	I1212 01:02:59.635800  141469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:02:59.788137  141469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:02:59.795216  141469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:02:59.795289  141469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:02:59.811889  141469 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:02:59.811917  141469 start.go:495] detecting cgroup driver to use...
	I1212 01:02:59.811992  141469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:02:59.827154  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:02:59.841138  141469 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:02:59.841193  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:02:59.854874  141469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:02:59.869250  141469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:02:59.994723  141469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:00.136385  141469 docker.go:233] disabling docker service ...
	I1212 01:03:00.136462  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:00.150966  141469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:00.163907  141469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:00.340171  141469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:00.480828  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:00.498056  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:00.518273  141469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:00.518339  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.529504  141469 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:00.529571  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.540616  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.553419  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.566004  141469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:00.577682  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.589329  141469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:00.612561  141469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
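The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.10 as the pause image, cgroupfs as the cgroup manager, "pod" as the conmon cgroup, and allows unprivileged low ports via default_sysctls. A quick way to see the net effect on the guest is sketched below; the expected output shape follows from the sed commands, but the surrounding file content varies with the minikube ISO.

    # Show the net effect of the sed edits above (output shape is indicative only).
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",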
	I1212 01:03:00.625526  141469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:00.635229  141469 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:00.635289  141469 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:00.657569  141469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:00.669982  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:00.793307  141469 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:00.887423  141469 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:00.887498  141469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:00.892715  141469 start.go:563] Will wait 60s for crictl version
	I1212 01:03:00.892773  141469 ssh_runner.go:195] Run: which crictl
	I1212 01:03:00.896646  141469 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:00.933507  141469 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
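After restarting CRI-O the tooling waits up to 60s for /var/run/crio/crio.sock and then up to 60s for `crictl version` to answer (CRI-O 1.29.1 here). A minimal sketch of those two waits follows; the 1s polling interval is an assumption.

    # Minimal sketch of the two 60s waits above; the 1s poll interval is assumed.
    timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'
    timeout 60 sh -c 'until sudo /usr/bin/crictl version >/dev/null 2>&1; do sleep 1; done'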
	I1212 01:03:00.933606  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:00.977011  141469 ssh_runner.go:195] Run: crio --version
	I1212 01:03:01.008491  141469 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:02:59.540301  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Start
	I1212 01:02:59.540482  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring networks are active...
	I1212 01:02:59.541181  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network default is active
	I1212 01:02:59.541503  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Ensuring network mk-default-k8s-diff-port-076578 is active
	I1212 01:02:59.541802  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Getting domain xml...
	I1212 01:02:59.542437  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Creating domain...
	I1212 01:03:00.796803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting to get IP...
	I1212 01:03:00.797932  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798386  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.798495  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.798404  142917 retry.go:31] will retry after 199.022306ms: waiting for machine to come up
	I1212 01:03:00.999067  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999547  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:00.999572  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:00.999499  142917 retry.go:31] will retry after 340.093067ms: waiting for machine to come up
	I1212 01:03:01.340839  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.341513  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.341437  142917 retry.go:31] will retry after 469.781704ms: waiting for machine to come up
	I1212 01:03:01.009956  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetIP
	I1212 01:03:01.012767  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013224  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:03:01.013252  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:03:01.013471  141469 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:01.017815  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:01.032520  141469 kubeadm.go:883] updating cluster {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:01.032662  141469 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:01.032715  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:01.070406  141469 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:01.070478  141469 ssh_runner.go:195] Run: which lz4
	I1212 01:03:01.074840  141469 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:01.079207  141469 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:01.079238  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1212 01:03:02.524822  141469 crio.go:462] duration metric: took 1.450020274s to copy over tarball
	I1212 01:03:02.524909  141469 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:01.812803  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813298  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:01.813335  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:01.813232  142917 retry.go:31] will retry after 552.327376ms: waiting for machine to come up
	I1212 01:03:02.367682  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368152  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:02.368187  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:02.368106  142917 retry.go:31] will retry after 629.731283ms: waiting for machine to come up
	I1212 01:03:02.999887  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000307  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.000339  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.000235  142917 retry.go:31] will retry after 764.700679ms: waiting for machine to come up
	I1212 01:03:03.766389  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766891  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:03.766919  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:03.766845  142917 retry.go:31] will retry after 920.806371ms: waiting for machine to come up
	I1212 01:03:04.689480  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690029  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:04.690087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:04.689996  142917 retry.go:31] will retry after 1.194297967s: waiting for machine to come up
	I1212 01:03:05.886345  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886729  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:05.886796  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:05.886714  142917 retry.go:31] will retry after 1.60985804s: waiting for machine to come up
	I1212 01:03:04.719665  141469 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.194717299s)
	I1212 01:03:04.719708  141469 crio.go:469] duration metric: took 2.194851225s to extract the tarball
	I1212 01:03:04.719719  141469 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:04.756600  141469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:04.802801  141469 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:04.802832  141469 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:04.802840  141469 kubeadm.go:934] updating node { 192.168.50.151 8443 v1.31.2 crio true true} ...
	I1212 01:03:04.802949  141469 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-607268 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
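The [Service] fragment above, with an empty `ExecStart=` followed by the full command line, is the standard systemd drop-in pattern: the blank ExecStart clears the base unit's command so the override can redefine it with the minikube-specific flags (--hostname-override, --node-ip, and so on). It is installed as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp a few lines below; the commands here are only an illustrative way to inspect the merged result on the guest.

    # Illustrative: view the base kubelet unit plus the 10-kubeadm.conf drop-in
    # that carries the ExecStart override shown above.
    systemctl cat kubelet
    systemctl show kubelet --property=ExecStart --no-pager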
	I1212 01:03:04.803008  141469 ssh_runner.go:195] Run: crio config
	I1212 01:03:04.854778  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:04.854804  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:04.854815  141469 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:04.854836  141469 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.151 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-607268 NodeName:embed-certs-607268 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.151"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.151 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:04.854962  141469 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.151
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-607268"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.151"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.151"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:04.855023  141469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:04.864877  141469 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:04.864967  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:04.874503  141469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1212 01:03:04.891124  141469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:04.907560  141469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
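The 2298-byte kubeadm.yaml.new copied above is the configuration dumped a few lines earlier: an InitConfiguration/ClusterConfiguration pair (kubeadm v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration, all pointed at the CRI-O socket and the 10.244.0.0/16 pod CIDR. A file of this shape would typically be consumed roughly as sketched below; the exact invocation and extra flags minikube uses are not shown in this excerpt.

    # Hedged sketch: how a kubeadm config of this shape is normally applied.
    # The precise command minikube runs is not part of this excerpt.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml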
	I1212 01:03:04.924434  141469 ssh_runner.go:195] Run: grep 192.168.50.151	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:04.928518  141469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.151	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:04.940523  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:05.076750  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:05.094388  141469 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268 for IP: 192.168.50.151
	I1212 01:03:05.094424  141469 certs.go:194] generating shared ca certs ...
	I1212 01:03:05.094440  141469 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:05.094618  141469 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:05.094691  141469 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:05.094710  141469 certs.go:256] generating profile certs ...
	I1212 01:03:05.094833  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/client.key
	I1212 01:03:05.094916  141469 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key.9253237b
	I1212 01:03:05.094968  141469 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key
	I1212 01:03:05.095131  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:05.095177  141469 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:05.095192  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:05.095224  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:05.095254  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:05.095293  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:05.095359  141469 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:05.096133  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:05.130605  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:05.164694  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:05.206597  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:05.241305  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1212 01:03:05.270288  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:05.296137  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:05.320158  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/embed-certs-607268/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:05.343820  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:05.373277  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:05.397070  141469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:05.420738  141469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:05.437822  141469 ssh_runner.go:195] Run: openssl version
	I1212 01:03:05.443744  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:05.454523  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459182  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.459237  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:05.465098  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:05.475681  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:05.486396  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490883  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.490929  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:05.496613  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:05.507295  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:05.517980  141469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522534  141469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.522590  141469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:05.528117  141469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:05.538979  141469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:05.543723  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:05.549611  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:05.555445  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:05.561482  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:05.567221  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:05.573015  141469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 01:03:05.578902  141469 kubeadm.go:392] StartCluster: {Name:embed-certs-607268 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-607268 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:05.578984  141469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:05.579042  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.619027  141469 cri.go:89] found id: ""
	I1212 01:03:05.619115  141469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:05.629472  141469 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:05.629501  141469 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:05.629567  141469 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:05.639516  141469 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:05.640491  141469 kubeconfig.go:125] found "embed-certs-607268" server: "https://192.168.50.151:8443"
	I1212 01:03:05.642468  141469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:05.651891  141469 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.151
	I1212 01:03:05.651922  141469 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:05.651934  141469 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:05.651978  141469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:05.686414  141469 cri.go:89] found id: ""
	I1212 01:03:05.686501  141469 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:05.702724  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:05.712454  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:05.712480  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:05.712531  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:05.721529  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:05.721603  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:05.730897  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:05.739758  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:05.739815  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:05.749089  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.758042  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:05.758104  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:05.767425  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:05.776195  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:05.776260  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:05.785435  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:05.794795  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:05.918710  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:06.846975  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.072898  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.139677  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:07.237216  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:07.237336  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:07.738145  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.238219  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:08.255671  141469 api_server.go:72] duration metric: took 1.018455783s to wait for apiserver process to appear ...
	I1212 01:03:08.255705  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:08.255734  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:08.256408  141469 api_server.go:269] stopped: https://192.168.50.151:8443/healthz: Get "https://192.168.50.151:8443/healthz": dial tcp 192.168.50.151:8443: connect: connection refused
	I1212 01:03:08.756070  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:07.498527  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498942  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:07.498966  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:07.498889  142917 retry.go:31] will retry after 2.278929136s: waiting for machine to come up
	I1212 01:03:09.779321  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779847  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:09.779879  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:09.779793  142917 retry.go:31] will retry after 1.82028305s: waiting for machine to come up
	I1212 01:03:10.630080  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.630121  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.630140  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.674408  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:10.674470  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:10.756660  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:10.763043  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:10.763088  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.256254  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.263457  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.263481  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:11.756759  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:11.764019  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:11.764053  141469 api_server.go:103] status: https://192.168.50.151:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:12.256627  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:03:12.262369  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:03:12.270119  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:12.270153  141469 api_server.go:131] duration metric: took 4.014438706s to wait for apiserver health ...
	I1212 01:03:12.270164  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:03:12.270172  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:12.272148  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:12.273667  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:12.289376  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:12.312620  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:12.323663  141469 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:12.323715  141469 system_pods.go:61] "coredns-7c65d6cfc9-n66x6" [ae2c1ac7-0c17-453d-a05c-70fbf6d81e1b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:12.323727  141469 system_pods.go:61] "etcd-embed-certs-607268" [811dc3d0-d893-45a0-a5c7-3fee0efd2e08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:12.323747  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [11848f2c-215b-4cf4-88f0-93151c55e7c5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:12.323764  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [4f4066ab-b6e4-4a46-a03b-dda1776c39ca] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:12.323776  141469 system_pods.go:61] "kube-proxy-9f6lj" [2463030a-d7db-4031-9e26-0a56a9067520] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:12.323784  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [c2aeaf4a-7fb8-4bb8-87ea-5401db017fe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:12.323795  141469 system_pods.go:61] "metrics-server-6867b74b74-5bms9" [e1a794f9-cf60-486f-a0e8-670dc7dfb4da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:12.323803  141469 system_pods.go:61] "storage-provisioner" [b29860cd-465d-4e70-ad5d-dd17c22ae290] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:12.323820  141469 system_pods.go:74] duration metric: took 11.170811ms to wait for pod list to return data ...
	I1212 01:03:12.323845  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:12.327828  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:12.327863  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:12.327880  141469 node_conditions.go:105] duration metric: took 4.029256ms to run NodePressure ...
	I1212 01:03:12.327902  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:12.638709  141469 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644309  141469 kubeadm.go:739] kubelet initialised
	I1212 01:03:12.644332  141469 kubeadm.go:740] duration metric: took 5.590168ms waiting for restarted kubelet to initialise ...
	I1212 01:03:12.644356  141469 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:12.650768  141469 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:11.601456  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602012  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:11.602044  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:11.601956  142917 retry.go:31] will retry after 2.272258384s: waiting for machine to come up
	I1212 01:03:13.876607  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.876986  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | unable to find current IP address of domain default-k8s-diff-port-076578 in network mk-default-k8s-diff-port-076578
	I1212 01:03:13.877024  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | I1212 01:03:13.876950  142917 retry.go:31] will retry after 4.014936005s: waiting for machine to come up
	I1212 01:03:19.148724  142150 start.go:364] duration metric: took 3m33.810164292s to acquireMachinesLock for "old-k8s-version-738445"
	I1212 01:03:19.148804  142150 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:19.148816  142150 fix.go:54] fixHost starting: 
	I1212 01:03:19.149247  142150 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:19.149331  142150 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:19.167749  142150 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34263
	I1212 01:03:19.168331  142150 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:19.168873  142150 main.go:141] libmachine: Using API Version  1
	I1212 01:03:19.168906  142150 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:19.169286  142150 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:19.169500  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:19.169655  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetState
	I1212 01:03:19.171285  142150 fix.go:112] recreateIfNeeded on old-k8s-version-738445: state=Stopped err=<nil>
	I1212 01:03:19.171323  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	W1212 01:03:19.171470  142150 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:19.174413  142150 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-738445" ...
	I1212 01:03:14.657097  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:16.658207  141469 pod_ready.go:103] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:17.657933  141469 pod_ready.go:93] pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:17.657957  141469 pod_ready.go:82] duration metric: took 5.007165494s for pod "coredns-7c65d6cfc9-n66x6" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:17.657966  141469 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:19.175763  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .Start
	I1212 01:03:19.175946  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring networks are active...
	I1212 01:03:19.176721  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network default is active
	I1212 01:03:19.177067  142150 main.go:141] libmachine: (old-k8s-version-738445) Ensuring network mk-old-k8s-version-738445 is active
	I1212 01:03:19.177512  142150 main.go:141] libmachine: (old-k8s-version-738445) Getting domain xml...
	I1212 01:03:19.178281  142150 main.go:141] libmachine: (old-k8s-version-738445) Creating domain...
	I1212 01:03:17.896127  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has current primary IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.896639  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Found IP for machine: 192.168.39.174
	I1212 01:03:17.896659  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserving static IP address...
	I1212 01:03:17.897028  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.897062  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Reserved static IP address: 192.168.39.174
	I1212 01:03:17.897087  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | skip adding static IP to network mk-default-k8s-diff-port-076578 - found existing host DHCP lease matching {name: "default-k8s-diff-port-076578", mac: "52:54:00:4f:0c:23", ip: "192.168.39.174"}
	I1212 01:03:17.897108  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Getting to WaitForSSH function...
	I1212 01:03:17.897126  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Waiting for SSH to be available...
	I1212 01:03:17.899355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899727  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:17.899754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:17.899911  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH client type: external
	I1212 01:03:17.899941  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa (-rw-------)
	I1212 01:03:17.899976  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:17.899989  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | About to run SSH command:
	I1212 01:03:17.900005  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | exit 0
	I1212 01:03:18.036261  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:18.036610  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetConfigRaw
	I1212 01:03:18.037352  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.040173  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040570  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.040595  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.040866  141884 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/config.json ...
	I1212 01:03:18.041107  141884 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:18.041134  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.041355  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.043609  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.043945  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.043973  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.044142  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.044291  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044466  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.044574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.044745  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.044986  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.045002  141884 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:18.156161  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:18.156193  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156472  141884 buildroot.go:166] provisioning hostname "default-k8s-diff-port-076578"
	I1212 01:03:18.156499  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.156691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.159391  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.159871  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.159903  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.160048  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.160244  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160379  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.160500  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.160681  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.160898  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.160917  141884 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-076578 && echo "default-k8s-diff-port-076578" | sudo tee /etc/hostname
	I1212 01:03:18.285904  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-076578
	
	I1212 01:03:18.285937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.288620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.288987  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.289010  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.289285  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.289491  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289658  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.289799  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.289981  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.290190  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.290223  141884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-076578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-076578/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-076578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:18.409683  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:18.409721  141884 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:18.409751  141884 buildroot.go:174] setting up certificates
	I1212 01:03:18.409761  141884 provision.go:84] configureAuth start
	I1212 01:03:18.409782  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetMachineName
	I1212 01:03:18.410045  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:18.412393  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412721  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.412756  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.412882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.415204  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415502  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.415530  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.415663  141884 provision.go:143] copyHostCerts
	I1212 01:03:18.415735  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:18.415757  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:18.415832  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:18.415925  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:18.415933  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:18.415952  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:18.416007  141884 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:18.416015  141884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:18.416032  141884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:18.416081  141884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-076578 san=[127.0.0.1 192.168.39.174 default-k8s-diff-port-076578 localhost minikube]
	I1212 01:03:18.502493  141884 provision.go:177] copyRemoteCerts
	I1212 01:03:18.502562  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:18.502594  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.505104  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505377  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.505409  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.505568  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.505754  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.505892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.506034  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.590425  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:18.616850  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1212 01:03:18.640168  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:18.664517  141884 provision.go:87] duration metric: took 254.738256ms to configureAuth
	I1212 01:03:18.664542  141884 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:18.664705  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:03:18.664778  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.667425  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.667784  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.667808  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.668004  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.668178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668313  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.668448  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.668587  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:18.668751  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:18.668772  141884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:18.906880  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:18.906908  141884 machine.go:96] duration metric: took 865.784426ms to provisionDockerMachine
	I1212 01:03:18.906920  141884 start.go:293] postStartSetup for "default-k8s-diff-port-076578" (driver="kvm2")
	I1212 01:03:18.906931  141884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:18.906949  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:18.907315  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:18.907348  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:18.909882  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910213  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:18.910242  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:18.910347  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:18.910542  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:18.910680  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:18.910806  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:18.994819  141884 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:18.998959  141884 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:18.998989  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:18.999069  141884 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:18.999163  141884 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:18.999252  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:19.009226  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:19.032912  141884 start.go:296] duration metric: took 125.973128ms for postStartSetup
	I1212 01:03:19.032960  141884 fix.go:56] duration metric: took 19.516187722s for fixHost
	I1212 01:03:19.032990  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.035623  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.035947  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.035977  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.036151  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.036310  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036438  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.036581  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.036738  141884 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:19.036906  141884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I1212 01:03:19.036919  141884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:19.148565  141884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965399.101726035
	
	I1212 01:03:19.148592  141884 fix.go:216] guest clock: 1733965399.101726035
	I1212 01:03:19.148602  141884 fix.go:229] Guest: 2024-12-12 01:03:19.101726035 +0000 UTC Remote: 2024-12-12 01:03:19.032967067 +0000 UTC m=+242.472137495 (delta=68.758968ms)
	I1212 01:03:19.148628  141884 fix.go:200] guest clock delta is within tolerance: 68.758968ms
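	[note] The two lines above compare the guest clock (the output of `date +%s.%N` run over SSH) against the host clock and accept the machine when the delta is inside a tolerance. A minimal sketch of that comparison, assuming a 2-second tolerance for illustration (the helper name and tolerance are not minikube's actual code):

	package main

	import (
	    "fmt"
	    "math"
	    "strconv"
	    "time"
	)

	// guestClockWithinTolerance parses the output of `date +%s.%N` captured on the
	// guest and reports whether it is within tol of the local (host) clock.
	func guestClockWithinTolerance(dateOutput string, tol time.Duration) (time.Duration, bool, error) {
	    secs, err := strconv.ParseFloat(dateOutput, 64)
	    if err != nil {
	        return 0, false, fmt.Errorf("parsing guest clock %q: %w", dateOutput, err)
	    }
	    guest := time.Unix(0, int64(secs*float64(time.Second)))
	    delta := time.Since(guest)
	    return delta, math.Abs(float64(delta)) <= float64(tol), nil
	}

	func main() {
	    // Example input in the shape logged above (seconds.nanoseconds); when run
	    // now, the delta will of course be far outside tolerance.
	    delta, ok, err := guestClockWithinTolerance("1733965399.101726035", 2*time.Second)
	    if err != nil {
	        panic(err)
	    }
	    fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}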
	I1212 01:03:19.148635  141884 start.go:83] releasing machines lock for "default-k8s-diff-port-076578", held for 19.631903968s
	I1212 01:03:19.148688  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.149016  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:19.151497  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.151926  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.151954  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.152124  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152598  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152762  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:03:19.152834  141884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:19.152892  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.152952  141884 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:19.152972  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:03:19.155620  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155694  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.155937  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.155962  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156057  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:19.156114  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156123  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:19.156316  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:03:19.156327  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156469  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:03:19.156583  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156619  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:03:19.156826  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.156824  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:03:19.268001  141884 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:19.275696  141884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:19.426624  141884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:19.432842  141884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:19.432911  141884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:19.449082  141884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:19.449108  141884 start.go:495] detecting cgroup driver to use...
	I1212 01:03:19.449187  141884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:19.466543  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:19.482668  141884 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:19.482733  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:19.497124  141884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:19.512626  141884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:19.624948  141884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:19.779469  141884 docker.go:233] disabling docker service ...
	I1212 01:03:19.779545  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:19.794888  141884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:19.810497  141884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:19.954827  141884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:20.086435  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:20.100917  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:20.120623  141884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:03:20.120683  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.134353  141884 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:20.134431  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.150373  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.165933  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.181524  141884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:03:20.196891  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.209752  141884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:20.228990  141884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
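	[note] The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to "cgroupfs", and adjust conmon_cgroup and default_sysctls, before crio is restarted further down. A rough Go equivalent of the first two substitutions, assuming the file is edited locally rather than over SSH (path and values are the ones in the log; this is a sketch, not minikube's implementation):

	package main

	import (
	    "fmt"
	    "os"
	    "regexp"
	)

	func main() {
	    const conf = "/etc/crio/crio.conf.d/02-crio.conf"

	    data, err := os.ReadFile(conf)
	    if err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }

	    // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	    out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	        ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	    // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	    out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	        ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	    if err := os.WriteFile(conf, out, 0o644); err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }
	}

	The remaining sed lines in the log handle conmon_cgroup and the net.ipv4.ip_unprivileged_port_start sysctl the same way.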
	I1212 01:03:20.241553  141884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:20.251819  141884 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:20.251883  141884 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:20.267155  141884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:20.277683  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:20.427608  141884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:20.525699  141884 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:20.525804  141884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:20.530984  141884 start.go:563] Will wait 60s for crictl version
	I1212 01:03:20.531055  141884 ssh_runner.go:195] Run: which crictl
	I1212 01:03:20.535013  141884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:20.576177  141884 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:20.576251  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.605529  141884 ssh_runner.go:195] Run: crio --version
	I1212 01:03:20.638175  141884 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1212 01:03:20.639475  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetIP
	I1212 01:03:20.642566  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643001  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:03:20.643034  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:03:20.643196  141884 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:20.647715  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:20.662215  141884 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:20.662337  141884 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:03:20.662381  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:20.705014  141884 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:03:20.705112  141884 ssh_runner.go:195] Run: which lz4
	I1212 01:03:20.709477  141884 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:20.714111  141884 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:20.714145  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
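	[note] The three lines above show the preload decision: probe for an existing /preloaded.tar.lz4 with `stat -c "%s %y"`, and only when that probe exits non-zero copy the ~392 MB cached tarball over. A simplified local sketch of that existence-check-then-copy decision (paths are taken from the log; the real code scp's the file to the guest rather than running `cp`):

	package main

	import (
	    "fmt"
	    "os/exec"
	)

	func main() {
	    const target = "/preloaded.tar.lz4"
	    const source = "/home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4"

	    // Existence check: `stat -c "%s %y" /preloaded.tar.lz4` exits non-zero when the file is missing.
	    if out, err := exec.Command("stat", "-c", "%s %y", target).CombinedOutput(); err == nil {
	        fmt.Printf("preload already present: %s", out)
	        return
	    }

	    // Not there: copy it over (stand-in for the scp in the log).
	    if out, err := exec.Command("cp", source, target).CombinedOutput(); err != nil {
	        fmt.Printf("copy failed: %v\n%s", err, out)
	        return
	    }
	    fmt.Println("preload copied")
	}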
	I1212 01:03:19.666527  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:21.666676  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:24.165316  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:20.457742  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting to get IP...
	I1212 01:03:20.458818  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.459318  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.459384  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.459280  143077 retry.go:31] will retry after 312.060355ms: waiting for machine to come up
	I1212 01:03:20.772778  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:20.773842  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:20.773876  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:20.773802  143077 retry.go:31] will retry after 381.023448ms: waiting for machine to come up
	I1212 01:03:21.156449  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.156985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.157017  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.156943  143077 retry.go:31] will retry after 395.528873ms: waiting for machine to come up
	I1212 01:03:21.554397  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:21.554873  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:21.554905  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:21.554833  143077 retry.go:31] will retry after 542.808989ms: waiting for machine to come up
	I1212 01:03:22.099791  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.100330  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.100360  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.100301  143077 retry.go:31] will retry after 627.111518ms: waiting for machine to come up
	I1212 01:03:22.728727  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:22.729219  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:22.729244  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:22.729167  143077 retry.go:31] will retry after 649.039654ms: waiting for machine to come up
	I1212 01:03:23.379498  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:23.379935  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:23.379968  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:23.379864  143077 retry.go:31] will retry after 1.057286952s: waiting for machine to come up
	I1212 01:03:24.438408  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:24.438821  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:24.438849  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:24.438774  143077 retry.go:31] will retry after 912.755322ms: waiting for machine to come up
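	[note] The retry.go lines above are the "waiting for machine to come up" loop: each failed IP lookup is followed by a growing, jittered delay (~312ms, ~381ms, ~395ms, ... up to one-second-plus waits) until the domain gets a DHCP lease. A compact sketch of that pattern, with the IP lookup abstracted behind a callback (the lookupIP function and the exact growth factor are placeholders, not libmachine's API):

	package main

	import (
	    "errors"
	    "fmt"
	    "math/rand"
	    "time"
	)

	// waitForIP polls lookupIP until it returns an address or the deadline passes,
	// sleeping a jittered, growing interval between attempts.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	    deadline := time.Now().Add(timeout)
	    base := 300 * time.Millisecond
	    for attempt := 1; time.Now().Before(deadline); attempt++ {
	        ip, err := lookupIP()
	        if err == nil && ip != "" {
	            return ip, nil
	        }
	        // Grow the base delay and add up to 50% random jitter.
	        sleep := base + time.Duration(rand.Int63n(int64(base/2)+1))
	        fmt.Printf("attempt %d: no IP yet, will retry after %v\n", attempt, sleep)
	        time.Sleep(sleep)
	        if base < 3*time.Second {
	            base = base * 3 / 2
	        }
	    }
	    return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
	    // Placeholder lookup that "finds" an IP on the fourth attempt.
	    calls := 0
	    ip, err := waitForIP(func() (string, error) {
	        calls++
	        if calls < 4 {
	            return "", errors.New("unable to find current IP address")
	        }
	        return "192.168.39.174", nil
	    }, 2*time.Minute)
	    fmt.Println(ip, err)
	}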
	I1212 01:03:22.285157  141884 crio.go:462] duration metric: took 1.575709911s to copy over tarball
	I1212 01:03:22.285258  141884 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:24.495814  141884 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.210502234s)
	I1212 01:03:24.495848  141884 crio.go:469] duration metric: took 2.210655432s to extract the tarball
	I1212 01:03:24.495857  141884 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:24.533396  141884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:24.581392  141884 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 01:03:24.581419  141884 cache_images.go:84] Images are preloaded, skipping loading
	I1212 01:03:24.581428  141884 kubeadm.go:934] updating node { 192.168.39.174 8444 v1.31.2 crio true true} ...
	I1212 01:03:24.581524  141884 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-076578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:24.581594  141884 ssh_runner.go:195] Run: crio config
	I1212 01:03:24.625042  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:24.625073  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:24.625083  141884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:24.625111  141884 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-076578 NodeName:default-k8s-diff-port-076578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:03:24.625238  141884 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-076578"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:24.625313  141884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:03:24.635818  141884 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:24.635903  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:24.645966  141884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1212 01:03:24.665547  141884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:24.682639  141884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1212 01:03:24.700147  141884 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:24.704172  141884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
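	[note] The /bin/bash one-liner above makes the control-plane.minikube.internal mapping idempotent: strip any existing line for that name, append the fresh "IP<TAB>name" entry, and copy the result back over /etc/hosts. The same idea as a small Go sketch operating on a hosts file path passed in (a simplified stand-in for the SSH-run shell pipeline; needs the same root privileges the logged command gets from sudo):

	package main

	import (
	    "fmt"
	    "os"
	    "strings"
	)

	// ensureHostsEntry removes any existing line ending in "\t<name>" and appends
	// "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline in the log.
	func ensureHostsEntry(path, ip, name string) error {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return err
	    }
	    var kept []string
	    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	        if strings.HasSuffix(line, "\t"+name) {
	            continue // drop the stale entry for this name
	        }
	        kept = append(kept, line)
	    }
	    kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
	    if err := ensureHostsEntry("/etc/hosts", "192.168.39.174", "control-plane.minikube.internal"); err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }
	}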
	I1212 01:03:24.716697  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:24.842374  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:24.860641  141884 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578 for IP: 192.168.39.174
	I1212 01:03:24.860676  141884 certs.go:194] generating shared ca certs ...
	I1212 01:03:24.860700  141884 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:24.860888  141884 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:24.860955  141884 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:24.860970  141884 certs.go:256] generating profile certs ...
	I1212 01:03:24.861110  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.key
	I1212 01:03:24.861200  141884 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key.4a68806a
	I1212 01:03:24.861251  141884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key
	I1212 01:03:24.861391  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:24.861444  141884 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:24.861458  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:24.861498  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:24.861535  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:24.861565  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:24.861629  141884 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:24.862588  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:24.899764  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:24.950373  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:24.983222  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:25.017208  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 01:03:25.042653  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:03:25.071358  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:25.097200  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1212 01:03:25.122209  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:25.150544  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:25.181427  141884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:25.210857  141884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:25.229580  141884 ssh_runner.go:195] Run: openssl version
	I1212 01:03:25.236346  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:25.247510  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252355  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.252407  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:25.258511  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:25.272698  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:25.289098  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295737  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.295806  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:25.304133  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:25.315805  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:25.328327  141884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333482  141884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.333539  141884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:25.339367  141884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
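	[note] The certificate installation above ends by recreating the OpenSSL hash links by hand: compute each PEM file's subject hash with `openssl x509 -hash -noout` and point /etc/ssl/certs/<hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0 in the log) at the certificate. A sketch of that pairing in Go, shelling out to openssl for the hash (cert path and link directory are the ones from the log; this is illustrative, not minikube's code):

	package main

	import (
	    "fmt"
	    "os"
	    "os/exec"
	    "path/filepath"
	    "strings"
	)

	func main() {
	    cert := "/usr/share/ca-certificates/minikubeCA.pem"

	    // `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. b5213941.
	    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	    if err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }
	    hash := strings.TrimSpace(string(out))

	    link := filepath.Join("/etc/ssl/certs", hash+".0")
	    os.Remove(link) // mirror `ln -fs`: replace any existing link
	    if err := os.Symlink(cert, link); err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }
	    fmt.Println(link, "->", cert)
	}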
	I1212 01:03:25.351612  141884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:25.357060  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:25.363452  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:25.369984  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:25.376434  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:25.382895  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:25.389199  141884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
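	[note] Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks a single question: does this certificate expire within the next 24 hours? The equivalent check in Go with crypto/x509, as a sketch (the path is just the first certificate from the log):

	package main

	import (
	    "crypto/x509"
	    "encoding/pem"
	    "errors"
	    "fmt"
	    "os"
	    "time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within d of now — the same question `openssl x509 -checkend <seconds>` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return false, err
	    }
	    block, _ := pem.Decode(data)
	    if block == nil {
	        return false, errors.New("no PEM block found")
	    }
	    cert, err := x509.ParseCertificate(block.Bytes)
	    if err != nil {
	        return false, err
	    }
	    return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
	    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	    if err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }
	    fmt.Println("expires within 24h:", soon)
	}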
	I1212 01:03:25.395232  141884 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-076578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-076578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:25.395325  141884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:25.395370  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.439669  141884 cri.go:89] found id: ""
	I1212 01:03:25.439749  141884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:25.453870  141884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:25.453893  141884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:25.453951  141884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:25.464552  141884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:25.465609  141884 kubeconfig.go:125] found "default-k8s-diff-port-076578" server: "https://192.168.39.174:8444"
	I1212 01:03:25.467767  141884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:25.477907  141884 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.174
	I1212 01:03:25.477943  141884 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:25.477958  141884 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:25.478018  141884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:25.521891  141884 cri.go:89] found id: ""
	I1212 01:03:25.521978  141884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:25.539029  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:25.549261  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:25.549283  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:25.549341  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:03:25.558948  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:25.559022  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:25.568947  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:03:25.579509  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:25.579614  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:25.589573  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.600434  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:25.600498  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:25.610337  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:03:25.619956  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:25.620014  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:03:25.631231  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:25.641366  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:25.761159  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:26.165525  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:28.168457  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.168492  141469 pod_ready.go:82] duration metric: took 10.510517291s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.168506  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175334  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.175361  141469 pod_ready.go:82] duration metric: took 6.84531ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.175375  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183060  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.183093  141469 pod_ready.go:82] duration metric: took 7.709158ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.183106  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.190999  141469 pod_ready.go:93] pod "kube-proxy-9f6lj" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.191028  141469 pod_ready.go:82] duration metric: took 7.913069ms for pod "kube-proxy-9f6lj" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.191040  141469 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199945  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:28.199972  141469 pod_ready.go:82] duration metric: took 8.923682ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:28.199984  141469 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
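	[note] The pod_ready lines above poll each kube-system pod of embed-certs-607268 until its Ready condition turns True, recording how long each wait took. A bare-bones version of that check using client-go, assuming a kubeconfig on disk (the kubeconfig path, poll interval, and 4-minute budget are illustrative; the namespace and pod name are the ones from the log):

	package main

	import (
	    "context"
	    "fmt"
	    "os"
	    "time"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady returns true when the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
	    for _, c := range pod.Status.Conditions {
	        if c.Type == corev1.PodReady {
	            return c.Status == corev1.ConditionTrue
	        }
	    }
	    return false
	}

	func main() {
	    // Assumption for the sketch: kubeconfig in the default home location.
	    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	    if err != nil {
	        panic(err)
	    }
	    client := kubernetes.NewForConfigOrDie(cfg)

	    deadline := time.Now().Add(4 * time.Minute)
	    for time.Now().Before(deadline) {
	        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-607268", metav1.GetOptions{})
	        if err == nil && isPodReady(pod) {
	            fmt.Println("pod is Ready")
	            return
	        }
	        time.Sleep(2 * time.Second) // keep polling; the real helper logs "Ready":"False" while it waits
	    }
	    fmt.Println("timed out waiting for pod to be Ready")
	}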
	I1212 01:03:25.352682  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:25.353126  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:25.353154  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:25.353073  143077 retry.go:31] will retry after 1.136505266s: waiting for machine to come up
	I1212 01:03:26.491444  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:26.491927  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:26.491955  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:26.491868  143077 retry.go:31] will retry after 1.467959561s: waiting for machine to come up
	I1212 01:03:27.961709  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:27.962220  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:27.962255  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:27.962169  143077 retry.go:31] will retry after 2.70831008s: waiting for machine to come up
	I1212 01:03:26.830271  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069070962s)
	I1212 01:03:26.830326  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.035935  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:27.113317  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
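	[note] Rather than a full `kubeadm init`, the restart path above replays the individual kubeadm phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each wrapped in `sudo env PATH=...` so the version-pinned binaries under /var/lib/minikube/binaries/v1.31.2 are used. A sketch of driving those phases from Go, run locally through bash for simplicity (the real runner executes them over SSH on the guest):

	package main

	import (
	    "fmt"
	    "os/exec"
	)

	func main() {
	    phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	    for _, phase := range phases {
	        cmd := fmt.Sprintf(
	            `sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
	            phase)
	        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	        if err != nil {
	            fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
	            return
	        }
	        fmt.Printf("phase %q done\n", phase)
	    }
	}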
	I1212 01:03:27.210226  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:27.210329  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:27.710504  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.211114  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:28.242967  141884 api_server.go:72] duration metric: took 1.032736901s to wait for apiserver process to appear ...
	I1212 01:03:28.243012  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:03:28.243038  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:28.243643  141884 api_server.go:269] stopped: https://192.168.39.174:8444/healthz: Get "https://192.168.39.174:8444/healthz": dial tcp 192.168.39.174:8444: connect: connection refused
	I1212 01:03:28.743921  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.546075  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.546113  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.546129  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.621583  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:03:31.621619  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:03:31.743860  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:31.750006  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:31.750052  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.243382  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.269990  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.270033  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:32.743516  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:32.752979  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:03:32.753012  141884 api_server.go:103] status: https://192.168.39.174:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:03:33.243571  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:03:33.247902  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:03:33.253786  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:03:33.253810  141884 api_server.go:131] duration metric: took 5.010790107s to wait for apiserver health ...
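The block above is the standard wait-for-apiserver loop: the tool polls https://<node-ip>:8444/healthz roughly every 500ms, logging the 403/500 bodies until the endpoint finally returns 200 "ok". As a rough illustration of that pattern (not minikube's actual api_server.go; the URL and timeout below are assumptions), a minimal Go poller could look like this:

// Minimal sketch of the polling pattern above (not minikube's api_server.go):
// probe /healthz every ~500ms, log non-200 bodies, stop on 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver's serving cert is not trusted by the probing host, so TLS
		// verification is skipped for this anonymous readiness probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200 ("ok")
			}
			fmt.Printf("healthz not ready (%d):\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.174:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}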
	I1212 01:03:33.253820  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:03:33.253826  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:33.255762  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:03:30.208396  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:32.708024  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:30.671930  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:30.672414  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:30.672442  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:30.672366  143077 retry.go:31] will retry after 2.799706675s: waiting for machine to come up
	I1212 01:03:33.474261  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:33.474816  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | unable to find current IP address of domain old-k8s-version-738445 in network mk-old-k8s-version-738445
	I1212 01:03:33.474851  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | I1212 01:03:33.474758  143077 retry.go:31] will retry after 4.339389188s: waiting for machine to come up
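The libmachine lines above show the kvm2 driver waiting for the restarted VM to obtain a DHCP lease, retrying with a growing, jittered delay ("will retry after …: waiting for machine to come up"). A minimal sketch of that retry pattern follows; lookupIP and the delays are illustrative placeholders, not minikube's retry.go.

// Illustrative retry-with-growing-delay loop, mirroring the "will retry after …:
// waiting for machine to come up" lines above. lookupIP is a placeholder for the
// DHCP-lease lookup the kvm2 driver performs; it is not a real minikube function.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease for MAC yet")

func lookupIP(mac string) (string, error) {
	// Placeholder: the real driver parses the libvirt network's DHCP leases here.
	return "", errNoLease
}

func waitForIP(mac string, budget time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < budget {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jittered delay
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2 // grow the base delay on each attempt
	}
	return "", fmt.Errorf("machine did not get an IP within %s", budget)
}

func main() {
	if _, err := waitForIP("52:54:00:00:e1:06", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}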
	I1212 01:03:33.257007  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:03:33.267934  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:03:33.286197  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:03:33.297934  141884 system_pods.go:59] 8 kube-system pods found
	I1212 01:03:33.297982  141884 system_pods.go:61] "coredns-7c65d6cfc9-xn886" [db1f42f1-93d9-4942-813d-e3de1cc24801] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:03:33.297995  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [25555578-8169-4986-aa10-06a442152c50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:03:33.298006  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [1004c64c-91ca-43c3-9c3d-43dab13d3812] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:03:33.298023  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [63d42313-4ea9-44f9-a8eb-b0c6c73424c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:03:33.298039  141884 system_pods.go:61] "kube-proxy-7frgh" [191ed421-4297-47c7-a46d-407a8eaa0378] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:03:33.298049  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [1506a505-697c-4b80-b7ef-55de1116fa14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:03:33.298060  141884 system_pods.go:61] "metrics-server-6867b74b74-k9s7n" [806badc0-b609-421f-9203-3fd91212a145] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:03:33.298077  141884 system_pods.go:61] "storage-provisioner" [bc133673-b7e2-42b2-98ac-e3284c9162ad] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:03:33.298090  141884 system_pods.go:74] duration metric: took 11.875762ms to wait for pod list to return data ...
	I1212 01:03:33.298105  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:03:33.302482  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:03:33.302517  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:03:33.302532  141884 node_conditions.go:105] duration metric: took 4.418219ms to run NodePressure ...
	I1212 01:03:33.302555  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:33.728028  141884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735780  141884 kubeadm.go:739] kubelet initialised
	I1212 01:03:33.735810  141884 kubeadm.go:740] duration metric: took 7.738781ms waiting for restarted kubelet to initialise ...
	I1212 01:03:33.735824  141884 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:03:33.743413  141884 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:35.750012  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
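The pod_ready.go lines here (and the metrics-server ones interleaved above) are readiness polls: fetch the pod and check its PodReady condition until it reports True or the 4m0s budget runs out. A minimal client-go sketch of that check is below; the kubeconfig path, polling budget, and hard-coded pod name are assumptions for illustration, not minikube's own helper.

// Minimal client-go sketch of the Ready check behind the pod_ready.go log lines:
// fetch the pod and inspect its PodReady condition, polling on a fixed cadence.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; substitute the profile's kubeconfig as appropriate.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 10; i++ { // bounded poll, roughly matching the ~2s cadence in the log
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-6867b74b74-5bms9", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}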
	I1212 01:03:39.348909  141411 start.go:364] duration metric: took 54.693436928s to acquireMachinesLock for "no-preload-242725"
	I1212 01:03:39.348976  141411 start.go:96] Skipping create...Using existing machine configuration
	I1212 01:03:39.348990  141411 fix.go:54] fixHost starting: 
	I1212 01:03:39.349442  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:03:39.349485  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:03:39.367203  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40493
	I1212 01:03:39.367584  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:03:39.368158  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:03:39.368185  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:03:39.368540  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:03:39.368717  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:39.368854  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:03:39.370433  141411 fix.go:112] recreateIfNeeded on no-preload-242725: state=Stopped err=<nil>
	I1212 01:03:39.370460  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	W1212 01:03:39.370594  141411 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 01:03:39.372621  141411 out.go:177] * Restarting existing kvm2 VM for "no-preload-242725" ...
	I1212 01:03:35.206417  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.208384  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:37.818233  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818777  142150 main.go:141] libmachine: (old-k8s-version-738445) Found IP for machine: 192.168.72.25
	I1212 01:03:37.818808  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has current primary IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.818818  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserving static IP address...
	I1212 01:03:37.819321  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.819376  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | skip adding static IP to network mk-old-k8s-version-738445 - found existing host DHCP lease matching {name: "old-k8s-version-738445", mac: "52:54:00:00:e1:06", ip: "192.168.72.25"}
	I1212 01:03:37.819390  142150 main.go:141] libmachine: (old-k8s-version-738445) Reserved static IP address: 192.168.72.25
	I1212 01:03:37.819412  142150 main.go:141] libmachine: (old-k8s-version-738445) Waiting for SSH to be available...
	I1212 01:03:37.819428  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Getting to WaitForSSH function...
	I1212 01:03:37.821654  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822057  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.822084  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.822234  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH client type: external
	I1212 01:03:37.822265  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa (-rw-------)
	I1212 01:03:37.822311  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.25 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:37.822325  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | About to run SSH command:
	I1212 01:03:37.822346  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | exit 0
	I1212 01:03:37.951989  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:37.952380  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetConfigRaw
	I1212 01:03:37.953037  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:37.955447  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.955770  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.955801  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.956073  142150 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/config.json ...
	I1212 01:03:37.956261  142150 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:37.956281  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:37.956490  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:37.958938  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959225  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:37.959262  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:37.959406  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:37.959569  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959749  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:37.959912  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:37.960101  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:37.960348  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:37.960364  142150 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:38.076202  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:38.076231  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076484  142150 buildroot.go:166] provisioning hostname "old-k8s-version-738445"
	I1212 01:03:38.076506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.076678  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.079316  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079689  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.079717  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.079853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.080047  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080178  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.080313  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.080481  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.080693  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.080708  142150 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-738445 && echo "old-k8s-version-738445" | sudo tee /etc/hostname
	I1212 01:03:38.212896  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-738445
	
	I1212 01:03:38.212934  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.215879  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216314  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.216353  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.216568  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.216792  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.216980  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.217138  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.217321  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.217556  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.217574  142150 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-738445' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-738445/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-738445' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:38.341064  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1212 01:03:38.341103  142150 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:38.341148  142150 buildroot.go:174] setting up certificates
	I1212 01:03:38.341167  142150 provision.go:84] configureAuth start
	I1212 01:03:38.341182  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetMachineName
	I1212 01:03:38.341471  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:38.343939  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344355  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.344385  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.344506  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.346597  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.346910  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.346960  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.347103  142150 provision.go:143] copyHostCerts
	I1212 01:03:38.347168  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:38.347188  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:38.347247  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:38.347363  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:38.347373  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:38.347397  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:38.347450  142150 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:38.347457  142150 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:38.347476  142150 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:38.347523  142150 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-738445 san=[127.0.0.1 192.168.72.25 localhost minikube old-k8s-version-738445]
	I1212 01:03:38.675149  142150 provision.go:177] copyRemoteCerts
	I1212 01:03:38.675217  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:38.675251  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.678239  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678639  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.678677  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.678853  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.679049  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.679174  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.679294  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:38.770527  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:03:38.797696  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:03:38.822454  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1212 01:03:38.847111  142150 provision.go:87] duration metric: took 505.925391ms to configureAuth
	I1212 01:03:38.847145  142150 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:03:38.847366  142150 config.go:182] Loaded profile config "old-k8s-version-738445": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 01:03:38.847459  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:38.850243  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850594  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:38.850621  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:38.850779  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:38.850981  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851153  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:38.851340  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:38.851581  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:38.851786  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:38.851803  142150 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:03:39.093404  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:03:39.093440  142150 machine.go:96] duration metric: took 1.137164233s to provisionDockerMachine
	I1212 01:03:39.093457  142150 start.go:293] postStartSetup for "old-k8s-version-738445" (driver="kvm2")
	I1212 01:03:39.093474  142150 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:03:39.093516  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.093848  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:03:39.093891  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.096719  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097117  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.097151  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.097305  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.097497  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.097650  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.097773  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.186726  142150 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:03:39.191223  142150 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:03:39.191249  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:03:39.191337  142150 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:03:39.191438  142150 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:03:39.191557  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:03:39.201460  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:39.229101  142150 start.go:296] duration metric: took 135.624628ms for postStartSetup
	I1212 01:03:39.229146  142150 fix.go:56] duration metric: took 20.080331642s for fixHost
	I1212 01:03:39.229168  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.231985  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232443  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.232479  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.232702  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.232913  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233076  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.233213  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.233368  142150 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:39.233632  142150 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.72.25 22 <nil> <nil>}
	I1212 01:03:39.233649  142150 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:03:39.348721  142150 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965419.319505647
	
	I1212 01:03:39.348749  142150 fix.go:216] guest clock: 1733965419.319505647
	I1212 01:03:39.348761  142150 fix.go:229] Guest: 2024-12-12 01:03:39.319505647 +0000 UTC Remote: 2024-12-12 01:03:39.229149912 +0000 UTC m=+234.032647876 (delta=90.355735ms)
	I1212 01:03:39.348787  142150 fix.go:200] guest clock delta is within tolerance: 90.355735ms
	I1212 01:03:39.348796  142150 start.go:83] releasing machines lock for "old-k8s-version-738445", held for 20.20001796s
	I1212 01:03:39.348829  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.349099  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:39.352088  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352481  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.352510  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.352667  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353244  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353428  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .DriverName
	I1212 01:03:39.353528  142150 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:03:39.353575  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.353645  142150 ssh_runner.go:195] Run: cat /version.json
	I1212 01:03:39.353674  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHHostname
	I1212 01:03:39.356260  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356614  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.356644  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356675  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.356908  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357112  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:39.357172  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:39.357293  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357375  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHPort
	I1212 01:03:39.357438  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.357514  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHKeyPath
	I1212 01:03:39.357652  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetSSHUsername
	I1212 01:03:39.357765  142150 sshutil.go:53] new ssh client: &{IP:192.168.72.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/old-k8s-version-738445/id_rsa Username:docker}
	I1212 01:03:39.441961  142150 ssh_runner.go:195] Run: systemctl --version
	I1212 01:03:39.478428  142150 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:03:39.631428  142150 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:03:39.637870  142150 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:03:39.637958  142150 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:03:39.655923  142150 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:03:39.655951  142150 start.go:495] detecting cgroup driver to use...
	I1212 01:03:39.656042  142150 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:03:39.676895  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:03:39.692966  142150 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:03:39.693048  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:03:39.710244  142150 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:03:39.725830  142150 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:03:39.848998  142150 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:03:40.014388  142150 docker.go:233] disabling docker service ...
	I1212 01:03:40.014458  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:03:40.035579  142150 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:03:40.052188  142150 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:03:40.184958  142150 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:03:40.332719  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:03:40.349338  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:03:40.371164  142150 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1212 01:03:40.371232  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.382363  142150 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:03:40.382437  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.393175  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.404397  142150 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:03:40.417867  142150 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
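The sed invocations above point CRI-O at the registry.k8s.io/pause:3.2 pause image and switch it to the cgroupfs driver with conmon_cgroup = "pod" by editing /etc/crio/crio.conf.d/02-crio.conf in place. A rough local equivalent of those edits, assuming the same file layout (minikube itself runs the sed commands over SSH, as logged):

// Rough local equivalent of the sed edits above: rewrite 02-crio.conf so that
// pause_image is registry.k8s.io/pause:3.2 and the cgroup driver is "cgroupfs"
// with conmon_cgroup = "pod". Path and behaviour are assumptions for illustration.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // assumed path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	cfg := string(data)
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	cfg = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed '/conmon_cgroup = .*/d' : drop any existing conmon_cgroup line
	cfg = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(cfg, "")
	// sed 's|^.*cgroup_manager = .*$|...|' plus the appended conmon_cgroup line
	cfg = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(cfg, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(cfg), 0o644); err != nil {
		fmt.Println(err)
	}
}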
	I1212 01:03:40.432988  142150 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:03:40.447070  142150 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:03:40.447145  142150 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:03:40.460260  142150 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 01:03:40.472139  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:40.616029  142150 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:03:40.724787  142150 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:03:40.724874  142150 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:03:40.732096  142150 start.go:563] Will wait 60s for crictl version
	I1212 01:03:40.732168  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:40.737266  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:03:40.790677  142150 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:03:40.790765  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.825617  142150 ssh_runner.go:195] Run: crio --version
	I1212 01:03:40.857257  142150 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1212 01:03:37.750453  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.752224  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:39.374093  141411 main.go:141] libmachine: (no-preload-242725) Calling .Start
	I1212 01:03:39.374303  141411 main.go:141] libmachine: (no-preload-242725) Ensuring networks are active...
	I1212 01:03:39.375021  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network default is active
	I1212 01:03:39.375456  141411 main.go:141] libmachine: (no-preload-242725) Ensuring network mk-no-preload-242725 is active
	I1212 01:03:39.375951  141411 main.go:141] libmachine: (no-preload-242725) Getting domain xml...
	I1212 01:03:39.376726  141411 main.go:141] libmachine: (no-preload-242725) Creating domain...
	I1212 01:03:40.703754  141411 main.go:141] libmachine: (no-preload-242725) Waiting to get IP...
	I1212 01:03:40.705274  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.705752  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.705821  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.705709  143226 retry.go:31] will retry after 196.576482ms: waiting for machine to come up
	I1212 01:03:40.904341  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:40.904718  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:40.904740  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:40.904669  143226 retry.go:31] will retry after 375.936901ms: waiting for machine to come up
	I1212 01:03:41.282278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.282839  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.282871  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.282793  143226 retry.go:31] will retry after 427.731576ms: waiting for machine to come up
	I1212 01:03:41.712553  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:41.713198  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:41.713231  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:41.713084  143226 retry.go:31] will retry after 421.07445ms: waiting for machine to come up
	I1212 01:03:39.707174  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:41.711103  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.207685  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:40.858851  142150 main.go:141] libmachine: (old-k8s-version-738445) Calling .GetIP
	I1212 01:03:40.861713  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862135  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:00:e1:06", ip: ""} in network mk-old-k8s-version-738445: {Iface:virbr4 ExpiryTime:2024-12-12 02:03:31 +0000 UTC Type:0 Mac:52:54:00:00:e1:06 Iaid: IPaddr:192.168.72.25 Prefix:24 Hostname:old-k8s-version-738445 Clientid:01:52:54:00:00:e1:06}
	I1212 01:03:40.862166  142150 main.go:141] libmachine: (old-k8s-version-738445) DBG | domain old-k8s-version-738445 has defined IP address 192.168.72.25 and MAC address 52:54:00:00:e1:06 in network mk-old-k8s-version-738445
	I1212 01:03:40.862355  142150 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1212 01:03:40.866911  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:03:40.879513  142150 kubeadm.go:883] updating cluster {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:03:40.879655  142150 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1212 01:03:40.879718  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:40.927436  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:40.927517  142150 ssh_runner.go:195] Run: which lz4
	I1212 01:03:40.932446  142150 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 01:03:40.937432  142150 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 01:03:40.937461  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1212 01:03:42.695407  142150 crio.go:462] duration metric: took 1.763008004s to copy over tarball
	I1212 01:03:42.695494  142150 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 01:03:41.768335  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.252708  141884 pod_ready.go:103] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:44.754333  141884 pod_ready.go:93] pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.754362  141884 pod_ready.go:82] duration metric: took 11.010925207s for pod "coredns-7c65d6cfc9-xn886" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.754371  141884 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760121  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.760142  141884 pod_ready.go:82] duration metric: took 5.764171ms for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.760151  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765554  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:44.765575  141884 pod_ready.go:82] duration metric: took 5.417017ms for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:44.765589  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
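
The pod_ready lines above show the test harness polling individual kube-system pods until their Ready condition turns True, with a per-pod timeout (4m0s here). The following is a minimal client-go sketch of that style of check under stated assumptions: the kubeconfig is read from the default home location, and the namespace and pod name are taken from the log purely as examples. It is not minikube's pod_ready.go.

// Minimal client-go sketch of "waiting up to 4m0s for pod ... to be Ready".
// Kubeconfig path, namespace and pod name are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	// Poll every couple of seconds until the pod reports Ready or the timeout fires.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-default-k8s-diff-port-076578", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}
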
	I1212 01:03:42.135878  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.136341  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.136367  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.136284  143226 retry.go:31] will retry after 477.81881ms: waiting for machine to come up
	I1212 01:03:42.616400  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:42.616906  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:42.616929  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:42.616858  143226 retry.go:31] will retry after 597.608319ms: waiting for machine to come up
	I1212 01:03:43.215837  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:43.216430  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:43.216454  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:43.216363  143226 retry.go:31] will retry after 1.118837214s: waiting for machine to come up
	I1212 01:03:44.336666  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:44.337229  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:44.337253  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:44.337187  143226 retry.go:31] will retry after 1.008232952s: waiting for machine to come up
	I1212 01:03:45.346868  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:45.347386  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:45.347423  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:45.347307  143226 retry.go:31] will retry after 1.735263207s: waiting for machine to come up
	I1212 01:03:47.084570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:47.084980  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:47.085012  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:47.084931  143226 retry.go:31] will retry after 1.662677797s: waiting for machine to come up
	I1212 01:03:46.208324  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.707694  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:45.698009  142150 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.002470206s)
	I1212 01:03:45.698041  142150 crio.go:469] duration metric: took 3.002598421s to extract the tarball
	I1212 01:03:45.698057  142150 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 01:03:45.746245  142150 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:03:45.783730  142150 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1212 01:03:45.783758  142150 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:03:45.783842  142150 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.783850  142150 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.783909  142150 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.783919  142150 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:45.783965  142150 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.783988  142150 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.783989  142150 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.783935  142150 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:45.785706  142150 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:45.785722  142150 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1212 01:03:45.785696  142150 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:45.785711  142150 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:45.785692  142150 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:45.785757  142150 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.010563  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.011085  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1212 01:03:46.072381  142150 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1212 01:03:46.072424  142150 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.072478  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.113400  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.113431  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.114036  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.114169  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.120739  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.124579  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.124728  142150 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1212 01:03:46.124754  142150 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1212 01:03:46.124784  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287160  142150 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1212 01:03:46.287214  142150 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.287266  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.287272  142150 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1212 01:03:46.287303  142150 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.287353  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294327  142150 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1212 01:03:46.294369  142150 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.294417  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294420  142150 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1212 01:03:46.294451  142150 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.294488  142150 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1212 01:03:46.294501  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294519  142150 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.294547  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.294561  142150 ssh_runner.go:195] Run: which crictl
	I1212 01:03:46.294640  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.296734  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.297900  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.310329  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.400377  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.400443  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1212 01:03:46.400478  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.400489  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.426481  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.434403  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.434471  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.568795  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1212 01:03:46.568915  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1212 01:03:46.568956  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.569017  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.584299  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1212 01:03:46.584337  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1212 01:03:46.608442  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1212 01:03:46.716715  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1212 01:03:46.716749  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1212 01:03:46.727723  142150 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1212 01:03:46.730180  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1212 01:03:46.730347  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1212 01:03:46.744080  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1212 01:03:46.770152  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1212 01:03:46.802332  142150 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1212 01:03:48.053863  142150 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:03:48.197060  142150 cache_images.go:92] duration metric: took 2.413284252s to LoadCachedImages
	W1212 01:03:48.197176  142150 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1212 01:03:48.197197  142150 kubeadm.go:934] updating node { 192.168.72.25 8443 v1.20.0 crio true true} ...
	I1212 01:03:48.197352  142150 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-738445 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.25
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:03:48.197443  142150 ssh_runner.go:195] Run: crio config
	I1212 01:03:48.246700  142150 cni.go:84] Creating CNI manager for ""
	I1212 01:03:48.246731  142150 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:03:48.246743  142150 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:03:48.246771  142150 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.25 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-738445 NodeName:old-k8s-version-738445 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.25"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.25 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1212 01:03:48.246952  142150 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.25
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-738445"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.25
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.25"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:03:48.247031  142150 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1212 01:03:48.257337  142150 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:03:48.257412  142150 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:03:48.267272  142150 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I1212 01:03:48.284319  142150 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:03:48.301365  142150 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1212 01:03:48.321703  142150 ssh_runner.go:195] Run: grep 192.168.72.25	control-plane.minikube.internal$ /etc/hosts
	I1212 01:03:48.326805  142150 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.25	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
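
The bash one-liner above rewrites /etc/hosts: it drops any existing line ending in "control-plane.minikube.internal" and appends a fresh entry pinned to the node IP. Here is a hedged Go sketch of the same filter-and-append idea; the output path is an illustrative assumption (it writes a local copy rather than replacing /etc/hosts), and the IP/hostname are copied from the log only as examples.

// Sketch of the /etc/hosts rewrite performed by the one-liner above:
// drop the old "control-plane.minikube.internal" line, append the pinned entry.
package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost mirrors `{ grep -v $'\t<name>$' /etc/hosts; echo "<ip>\t<name>"; }`.
func pinHost(hostsData, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hostsData, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove any existing pinned entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	out := pinHost(strings.TrimRight(string(data), "\n"), "192.168.72.25", "control-plane.minikube.internal")
	// Write a rewritten copy next to the sketch instead of replacing /etc/hosts.
	if err := os.WriteFile("hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote hosts.new")
}
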
	I1212 01:03:48.343523  142150 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:03:48.476596  142150 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:03:48.497742  142150 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445 for IP: 192.168.72.25
	I1212 01:03:48.497830  142150 certs.go:194] generating shared ca certs ...
	I1212 01:03:48.497859  142150 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:48.498094  142150 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:03:48.498160  142150 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:03:48.498177  142150 certs.go:256] generating profile certs ...
	I1212 01:03:48.498311  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.key
	I1212 01:03:48.498388  142150 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key.2e4d2e55
	I1212 01:03:48.498445  142150 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key
	I1212 01:03:48.498603  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:03:48.498651  142150 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:03:48.498665  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:03:48.498700  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:03:48.498732  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:03:48.498761  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:03:48.498816  142150 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:03:48.499418  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:03:48.546900  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:03:48.587413  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:03:48.617873  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:03:48.645334  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1212 01:03:48.673348  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 01:03:48.707990  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:03:48.748273  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:03:48.785187  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:03:48.818595  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:03:48.843735  142150 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:03:48.871353  142150 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:03:48.893168  142150 ssh_runner.go:195] Run: openssl version
	I1212 01:03:48.902034  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:03:48.916733  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921766  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.921849  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:03:48.928169  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:03:48.939794  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:03:48.951260  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957920  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.957987  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:03:48.965772  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:03:48.977889  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:03:48.989362  142150 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995796  142150 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:03:48.995866  142150 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:03:49.002440  142150 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:03:49.014144  142150 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:03:49.020570  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:03:49.027464  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:03:49.033770  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:03:49.040087  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:03:49.046103  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:03:49.052288  142150 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
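
The `openssl x509 -noout -checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours, which decides whether minikube can reuse it. A minimal Go sketch of the same check with the standard crypto/x509 package follows; the certificate file name is an illustrative assumption, not a path from the log.

// Sketch of `openssl x509 -noout -checkend 86400`: parse a PEM certificate
// and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate's NotAfter falls inside d.
func expiresWithin(certPEM []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("apiserver.crt") // illustrative path
	if err != nil {
		panic(err)
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need to be regenerated")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}
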
	I1212 01:03:49.058638  142150 kubeadm.go:392] StartCluster: {Name:old-k8s-version-738445 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-738445 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.25 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:03:49.058762  142150 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:03:49.058820  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.101711  142150 cri.go:89] found id: ""
	I1212 01:03:49.101800  142150 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:03:49.113377  142150 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:03:49.113398  142150 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:03:49.113439  142150 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:03:49.124296  142150 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:03:49.125851  142150 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-738445" does not appear in /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:03:49.126876  142150 kubeconfig.go:62] /home/jenkins/minikube-integration/20083-86355/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-738445" cluster setting kubeconfig missing "old-k8s-version-738445" context setting]
	I1212 01:03:49.127925  142150 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:03:49.129837  142150 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:03:49.143200  142150 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.25
	I1212 01:03:49.143244  142150 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:03:49.143262  142150 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:03:49.143339  142150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:03:49.190150  142150 cri.go:89] found id: ""
	I1212 01:03:49.190240  142150 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:03:49.208500  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:03:49.219194  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:03:49.219221  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:03:49.219299  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:03:49.231345  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:03:49.231442  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:03:49.244931  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:03:49.254646  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:03:49.254721  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:03:49.264535  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.273770  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:03:49.273875  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:03:49.284129  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:03:49.293154  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:03:49.293221  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
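
The grep/rm sequence above walks the four component kubeconfigs (admin, kubelet, controller-manager, scheduler), keeps each one only if it already references https://control-plane.minikube.internal:8443, and otherwise deletes it so the following kubeadm init phases regenerate it. A short Go sketch of that keep-or-remove loop is below; the helper and its output messages are illustrative, while the paths and endpoint are taken from the log.

// Sketch of the stale-kubeconfig cleanup loop above: keep a component
// kubeconfig only if it already points at the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it, like `sudo rm -f` above,
			// so kubeadm regenerates it during the kubeconfig phase.
			fmt.Printf("%q may not reference %s - removing\n", f, endpoint)
			_ = os.Remove(f)
			continue
		}
		fmt.Printf("%q already references %s - keeping\n", f, endpoint)
	}
}
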
	I1212 01:03:49.302654  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:03:49.312579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:49.458825  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:48.069316  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.069362  141884 pod_ready.go:82] duration metric: took 3.303763458s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.069380  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328758  141884 pod_ready.go:93] pod "kube-proxy-7frgh" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.328784  141884 pod_ready.go:82] duration metric: took 259.396178ms for pod "kube-proxy-7frgh" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.328798  141884 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337082  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:03:48.337106  141884 pod_ready.go:82] duration metric: took 8.298777ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:48.337119  141884 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	I1212 01:03:50.343458  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:48.748914  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:48.749510  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:48.749535  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:48.749475  143226 retry.go:31] will retry after 2.670904101s: waiting for machine to come up
	I1212 01:03:51.421499  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:51.421915  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:51.421961  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:51.421862  143226 retry.go:31] will retry after 3.566697123s: waiting for machine to come up
	I1212 01:03:50.708435  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:53.207675  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:50.328104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.599973  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.749920  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:03:50.834972  142150 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:03:50.835093  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.335779  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:51.835728  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.335936  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.335817  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:53.836146  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.335264  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:54.835917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:52.344098  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.344166  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:56.345835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:54.990515  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:54.990916  141411 main.go:141] libmachine: (no-preload-242725) DBG | unable to find current IP address of domain no-preload-242725 in network mk-no-preload-242725
	I1212 01:03:54.990941  141411 main.go:141] libmachine: (no-preload-242725) DBG | I1212 01:03:54.990869  143226 retry.go:31] will retry after 4.288131363s: waiting for machine to come up
	I1212 01:03:55.706167  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:57.707796  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:55.335677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:55.835164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.335826  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:56.835888  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.335539  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:57.835520  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.335630  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.835457  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:59.835939  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:03:58.843944  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.844210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:03:59.284312  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.284807  141411 main.go:141] libmachine: (no-preload-242725) Found IP for machine: 192.168.61.222
	I1212 01:03:59.284834  141411 main.go:141] libmachine: (no-preload-242725) Reserving static IP address...
	I1212 01:03:59.284851  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has current primary IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.285300  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.285334  141411 main.go:141] libmachine: (no-preload-242725) DBG | skip adding static IP to network mk-no-preload-242725 - found existing host DHCP lease matching {name: "no-preload-242725", mac: "52:54:00:ab:6f:4a", ip: "192.168.61.222"}
	I1212 01:03:59.285357  141411 main.go:141] libmachine: (no-preload-242725) Reserved static IP address: 192.168.61.222
	I1212 01:03:59.285376  141411 main.go:141] libmachine: (no-preload-242725) Waiting for SSH to be available...
	I1212 01:03:59.285390  141411 main.go:141] libmachine: (no-preload-242725) DBG | Getting to WaitForSSH function...
	I1212 01:03:59.287532  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287840  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.287869  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.287970  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH client type: external
	I1212 01:03:59.287998  141411 main.go:141] libmachine: (no-preload-242725) DBG | Using SSH private key: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa (-rw-------)
	I1212 01:03:59.288043  141411 main.go:141] libmachine: (no-preload-242725) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1212 01:03:59.288066  141411 main.go:141] libmachine: (no-preload-242725) DBG | About to run SSH command:
	I1212 01:03:59.288092  141411 main.go:141] libmachine: (no-preload-242725) DBG | exit 0
	I1212 01:03:59.415723  141411 main.go:141] libmachine: (no-preload-242725) DBG | SSH cmd err, output: <nil>: 
	I1212 01:03:59.416104  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetConfigRaw
	I1212 01:03:59.416755  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.419446  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.419848  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.419879  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.420182  141411 profile.go:143] Saving config to /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/config.json ...
	I1212 01:03:59.420388  141411 machine.go:93] provisionDockerMachine start ...
	I1212 01:03:59.420412  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:03:59.420637  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.422922  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423257  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.423278  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.423432  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.423626  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423787  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.423918  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.424051  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.424222  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.424231  141411 main.go:141] libmachine: About to run SSH command:
	hostname
	I1212 01:03:59.536768  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1212 01:03:59.536796  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537016  141411 buildroot.go:166] provisioning hostname "no-preload-242725"
	I1212 01:03:59.537042  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.537234  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.539806  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540110  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.540141  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.540337  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.540509  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540665  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.540800  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.540973  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.541155  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.541171  141411 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-242725 && echo "no-preload-242725" | sudo tee /etc/hostname
	I1212 01:03:59.668244  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-242725
	
	I1212 01:03:59.668269  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.671021  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671353  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.671374  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.671630  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.671851  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672000  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.672160  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.672310  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:03:59.672485  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:03:59.672502  141411 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-242725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-242725/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-242725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 01:03:59.792950  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
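
The SSH command above is how the provisioner makes the new hostname resolve locally: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. For reference, a minimal Go sketch of the same idempotent edit applied to a local hosts file (the path and hostname come from the log; everything else is illustrative, not minikube's actual code):

    package hosts

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostname makes sure the hosts file maps 127.0.1.1 to the given
    // hostname, mirroring the grep/sed/tee logic in the SSH command above.
    func ensureHostname(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	for _, l := range lines {
    		if strings.HasSuffix(strings.TrimSpace(l), hostname) {
    			return nil // already mapped, nothing to do
    		}
    	}
    	replaced := false
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			replaced = true
    			break
    		}
    	}
    	if !replaced {
    		lines = append(lines, fmt.Sprintf("127.0.1.1 %s", hostname))
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }
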
	I1212 01:03:59.792985  141411 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20083-86355/.minikube CaCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20083-86355/.minikube}
	I1212 01:03:59.793011  141411 buildroot.go:174] setting up certificates
	I1212 01:03:59.793024  141411 provision.go:84] configureAuth start
	I1212 01:03:59.793041  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetMachineName
	I1212 01:03:59.793366  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:03:59.796185  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796599  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.796638  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.796783  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.799165  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799532  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.799558  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.799711  141411 provision.go:143] copyHostCerts
	I1212 01:03:59.799780  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem, removing ...
	I1212 01:03:59.799804  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem
	I1212 01:03:59.799869  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/key.pem (1675 bytes)
	I1212 01:03:59.800004  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem, removing ...
	I1212 01:03:59.800015  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem
	I1212 01:03:59.800051  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/ca.pem (1078 bytes)
	I1212 01:03:59.800144  141411 exec_runner.go:144] found /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem, removing ...
	I1212 01:03:59.800155  141411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem
	I1212 01:03:59.800182  141411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20083-86355/.minikube/cert.pem (1123 bytes)
	I1212 01:03:59.800263  141411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem org=jenkins.no-preload-242725 san=[127.0.0.1 192.168.61.222 localhost minikube no-preload-242725]
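
provision.go:117 generates a server certificate whose SANs are exactly the list logged above (loopback, the machine IP, localhost, minikube, and the machine name). A hedged sketch of such a template using only the standard library; the one-year validity and key-usage bits are assumptions, while the SAN set and the jenkins.<name> organization are taken from the log:

    package provision

    import (
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // serverCertTemplate builds an x509 template with the SAN set logged above:
    // [127.0.0.1 192.168.61.222 localhost minikube no-preload-242725].
    func serverCertTemplate(hostname string, machineIP net.IP) *x509.Certificate {
    	return &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins." + hostname}},
    		DNSNames:     []string{"localhost", "minikube", hostname},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), machineIP},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0), // assumed validity window
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    }
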
	I1212 01:03:59.987182  141411 provision.go:177] copyRemoteCerts
	I1212 01:03:59.987249  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 01:03:59.987290  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:03:59.989902  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990285  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:03:59.990317  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:03:59.990520  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:03:59.990712  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:03:59.990856  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:03:59.990981  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.078289  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 01:04:00.103149  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1212 01:04:00.131107  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 01:04:00.159076  141411 provision.go:87] duration metric: took 366.034024ms to configureAuth
	I1212 01:04:00.159103  141411 buildroot.go:189] setting minikube options for container-runtime
	I1212 01:04:00.159305  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:04:00.159401  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.162140  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162537  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.162570  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.162696  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.162864  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163016  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.163124  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.163262  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.163436  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.163451  141411 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 01:04:00.407729  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 01:04:00.407758  141411 machine.go:96] duration metric: took 987.35601ms to provisionDockerMachine
	I1212 01:04:00.407773  141411 start.go:293] postStartSetup for "no-preload-242725" (driver="kvm2")
	I1212 01:04:00.407787  141411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 01:04:00.407810  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.408186  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 01:04:00.408218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.410950  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411329  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.411360  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.411585  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.411809  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.411981  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.412115  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.498221  141411 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 01:04:00.502621  141411 info.go:137] Remote host: Buildroot 2023.02.9
	I1212 01:04:00.502644  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/addons for local assets ...
	I1212 01:04:00.502705  141411 filesync.go:126] Scanning /home/jenkins/minikube-integration/20083-86355/.minikube/files for local assets ...
	I1212 01:04:00.502779  141411 filesync.go:149] local asset: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem -> 936002.pem in /etc/ssl/certs
	I1212 01:04:00.502863  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 01:04:00.512322  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:00.540201  141411 start.go:296] duration metric: took 132.410555ms for postStartSetup
	I1212 01:04:00.540250  141411 fix.go:56] duration metric: took 21.191260423s for fixHost
	I1212 01:04:00.540287  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.542631  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.542983  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.543011  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.543212  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.543393  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543556  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.543702  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.543867  141411 main.go:141] libmachine: Using SSH client type: native
	I1212 01:04:00.544081  141411 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil>  [] 0s} 192.168.61.222 22 <nil> <nil>}
	I1212 01:04:00.544095  141411 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1212 01:04:00.656532  141411 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733965440.609922961
	
	I1212 01:04:00.656560  141411 fix.go:216] guest clock: 1733965440.609922961
	I1212 01:04:00.656569  141411 fix.go:229] Guest: 2024-12-12 01:04:00.609922961 +0000 UTC Remote: 2024-12-12 01:04:00.540255801 +0000 UTC m=+358.475944555 (delta=69.66716ms)
	I1212 01:04:00.656597  141411 fix.go:200] guest clock delta is within tolerance: 69.66716ms
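
fix.go compares the guest's `date +%s.%N` output with the host-side reference time and only intervenes when the drift exceeds a tolerance; here the delta is 69.67ms and is accepted. A minimal sketch of that comparison (the caller supplies the tolerance; the exact value minikube uses is not shown in the log):

    package fix

    import (
    	"strconv"
    	"strings"
    	"time"
    )

    // guestClockDelta parses `date +%s.%N` output from the guest and returns the
    // absolute drift from the reference time, plus whether it is within tolerance.
    func guestClockDelta(guestOut string, ref time.Time, tol time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, false, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(ref)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tol, nil
    }
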
	I1212 01:04:00.656616  141411 start.go:83] releasing machines lock for "no-preload-242725", held for 21.307670093s
	I1212 01:04:00.656644  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.656898  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:00.659345  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659694  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.659722  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.659878  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660405  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660584  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:04:00.660663  141411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 01:04:00.660731  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.660751  141411 ssh_runner.go:195] Run: cat /version.json
	I1212 01:04:00.660771  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:04:00.663331  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663458  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663717  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663757  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663789  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:00.663802  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:00.663867  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664039  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664044  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:04:00.664201  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664202  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:04:00.664359  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:04:00.664359  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.664490  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:04:00.777379  141411 ssh_runner.go:195] Run: systemctl --version
	I1212 01:04:00.783765  141411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 01:04:00.933842  141411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 01:04:00.941376  141411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 01:04:00.941441  141411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 01:04:00.958993  141411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1212 01:04:00.959021  141411 start.go:495] detecting cgroup driver to use...
	I1212 01:04:00.959084  141411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 01:04:00.977166  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 01:04:00.991166  141411 docker.go:217] disabling cri-docker service (if available) ...
	I1212 01:04:00.991231  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 01:04:01.004993  141411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 01:04:01.018654  141411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 01:04:01.136762  141411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 01:04:01.300915  141411 docker.go:233] disabling docker service ...
	I1212 01:04:01.301036  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 01:04:01.316124  141411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 01:04:01.329544  141411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 01:04:01.451034  141411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 01:04:01.583471  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 01:04:01.611914  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 01:04:01.632628  141411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1212 01:04:01.632706  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.644315  141411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 01:04:01.644384  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.656980  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.668295  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.679885  141411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 01:04:01.692032  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.703893  141411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.724486  141411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 01:04:01.737251  141411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 01:04:01.748955  141411 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1212 01:04:01.749025  141411 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1212 01:04:01.763688  141411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
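
The crio.go:166 warning above is an expected fallback: the bridge sysctl does not exist until br_netfilter is loaded, so minikube loads the module and then enables IPv4 forwarding. A sketch of that sequence with os/exec, run locally rather than through ssh_runner:

    package crio

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureNetfilter mirrors the fallback above: probe the bridge sysctl and,
    // if it is missing, load br_netfilter before enabling IPv4 forwarding.
    func ensureNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// Key absent until the module is loaded; this is the path taken in the log.
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }
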
	I1212 01:04:01.773871  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:01.903690  141411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 01:04:02.006921  141411 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 01:04:02.007013  141411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 01:04:02.013116  141411 start.go:563] Will wait 60s for crictl version
	I1212 01:04:02.013187  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.017116  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1212 01:04:02.061210  141411 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1212 01:04:02.061304  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.093941  141411 ssh_runner.go:195] Run: crio --version
	I1212 01:04:02.124110  141411 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
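
start.go:542 and start.go:563 give the restarted CRI-O up to 60 seconds to expose its socket and answer crictl. A sketch of such a bounded wait; the 500ms poll interval is an assumption:

    package start

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for the CRI-O socket path until it appears or the
    // deadline passes, like the 60s waits logged above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // assumed poll interval
    	}
    	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
    }
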
	I1212 01:03:59.708028  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:01.709056  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:04.207527  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:00.335673  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:00.835254  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.336063  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:01.835209  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.335874  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.835468  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.335332  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:03.835312  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.335965  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:04.835626  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:02.845618  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.346194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:02.125647  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetIP
	I1212 01:04:02.128481  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.128914  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:04:02.128973  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:04:02.129205  141411 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1212 01:04:02.133801  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:02.148892  141411 kubeadm.go:883] updating cluster {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 01:04:02.149001  141411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1212 01:04:02.149033  141411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 01:04:02.187762  141411 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1212 01:04:02.187805  141411 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1212 01:04:02.187934  141411 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.187988  141411 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.188025  141411 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.188070  141411 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.188118  141411 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.188220  141411 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.188332  141411 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1212 01:04:02.188501  141411 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.189594  141411 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.189674  141411 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.189892  141411 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.190015  141411 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1212 01:04:02.190121  141411 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.190152  141411 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.190169  141411 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.190746  141411 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:02.372557  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.375185  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.389611  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.394581  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.396799  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.408346  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1212 01:04:02.413152  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.438165  141411 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1212 01:04:02.438217  141411 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.438272  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.518752  141411 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1212 01:04:02.518804  141411 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.518856  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.556287  141411 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1212 01:04:02.556329  141411 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.556371  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569629  141411 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1212 01:04:02.569671  141411 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.569683  141411 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1212 01:04:02.569721  141411 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.569731  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.569770  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667454  141411 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1212 01:04:02.667511  141411 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.667510  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.667532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.667549  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:02.667632  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.667644  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.667671  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.683807  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.784024  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.797709  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.797836  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.797848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.797969  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.822411  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:02.880580  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1212 01:04:02.927305  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1212 01:04:02.928532  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1212 01:04:02.928661  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1212 01:04:02.938172  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1212 01:04:02.973083  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1212 01:04:03.023699  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1212 01:04:03.023813  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.069822  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1212 01:04:03.069879  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1212 01:04:03.069920  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1212 01:04:03.069945  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:03.069973  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:03.069990  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:03.070037  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1212 01:04:03.070116  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:03.094188  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1212 01:04:03.094210  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094229  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1212 01:04:03.094249  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1212 01:04:03.094285  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1212 01:04:03.094313  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1212 01:04:03.094379  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1212 01:04:03.094399  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1212 01:04:03.094480  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:04.469173  141411 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.174822  141411 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.080313699s)
	I1212 01:04:05.174869  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1212 01:04:05.174899  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.08062641s)
	I1212 01:04:05.174928  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1212 01:04:05.174968  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.174994  141411 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1212 01:04:05.175034  141411 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:05.175086  141411 ssh_runner.go:195] Run: which crictl
	I1212 01:04:05.175038  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1212 01:04:05.179340  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:06.207626  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:08.706815  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:05.335479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:05.835485  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.335252  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:06.835837  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.335166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.835880  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.336166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:08.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.335533  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:09.835771  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:07.843908  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:07.654693  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (2.479543185s)
	I1212 01:04:07.654721  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1212 01:04:07.654743  141411 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.654775  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.475408038s)
	I1212 01:04:07.654848  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:07.654784  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1212 01:04:07.699286  141411 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:04:09.647620  141411 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.948278157s)
	I1212 01:04:09.647642  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.992718083s)
	I1212 01:04:09.647662  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1212 01:04:09.647683  141411 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1212 01:04:09.647686  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647734  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1212 01:04:09.647776  141411 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:09.652886  141411 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1212 01:04:11.112349  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.464585062s)
	I1212 01:04:11.112384  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1212 01:04:11.112412  141411 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.112462  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1212 01:04:11.206933  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.208623  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:10.335255  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:10.835915  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.335375  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:11.835283  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.335618  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.835897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.335425  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:13.835757  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.335839  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:14.836078  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:12.844442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:14.845189  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:13.083753  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (1.971262547s)
	I1212 01:04:13.083788  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1212 01:04:13.083821  141411 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:13.083878  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1212 01:04:17.087777  141411 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.003870257s)
	I1212 01:04:17.087818  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1212 01:04:17.087853  141411 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:17.087917  141411 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1212 01:04:15.707981  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:18.207205  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:15.336090  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:15.835274  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.335372  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:16.835280  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.335431  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.835268  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.335492  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:18.835414  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.335266  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:19.835632  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:17.345467  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:19.845255  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:17.734979  141411 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20083-86355/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1212 01:04:17.735041  141411 cache_images.go:123] Successfully loaded all cached images
	I1212 01:04:17.735049  141411 cache_images.go:92] duration metric: took 15.547226992s to LoadCachedImages
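
The cache_images sequence above reduces to: inspect what the runtime already has, remove stale tags with crictl, skip the transfer when the tarball already exists under /var/lib/minikube/images, and podman-load each image. A compressed sketch of that loop; the runCmd and exists helpers are hypothetical stand-ins for minikube's ssh_runner, not real APIs:

    package cacheimages

    import (
    	"fmt"
    	"path/filepath"
    )

    // runCmd abstracts "run this shell command on the machine"; in the log this
    // role is played by ssh_runner. It is a hypothetical helper here.
    type runCmd func(cmd string) error

    // loadCached copies (if needed) and loads each cached image tarball, in the
    // same order of operations as the cache_images/crio.go lines above.
    func loadCached(run runCmd, tarballs []string, exists func(string) bool) error {
    	for _, src := range tarballs {
    		dst := filepath.Join("/var/lib/minikube/images", filepath.Base(src))
    		if !exists(dst) {
    			if err := run(fmt.Sprintf("sudo cp %s %s", src, dst)); err != nil { // stand-in for the scp step
    				return err
    			}
    		}
    		if err := run("sudo podman load -i " + dst); err != nil {
    			return fmt.Errorf("loading %s: %w", dst, err)
    		}
    	}
    	return nil
    }
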
	I1212 01:04:17.735066  141411 kubeadm.go:934] updating node { 192.168.61.222 8443 v1.31.2 crio true true} ...
	I1212 01:04:17.735209  141411 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-242725 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 01:04:17.735311  141411 ssh_runner.go:195] Run: crio config
	I1212 01:04:17.780826  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:17.780850  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:17.780859  141411 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1212 01:04:17.780882  141411 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.222 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-242725 NodeName:no-preload-242725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 01:04:17.781025  141411 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.222
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-242725"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.222"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.222"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 01:04:17.781091  141411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1212 01:04:17.792290  141411 binaries.go:44] Found k8s binaries, skipping transfer
	I1212 01:04:17.792374  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 01:04:17.802686  141411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1212 01:04:17.819496  141411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 01:04:17.836164  141411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
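	The kubeadm/kubelet/kube-proxy configuration shown above is rendered from the cluster parameters and transferred to the node as /var/tmp/minikube/kubeadm.yaml.new (2297 bytes), to be diffed later against the existing /var/tmp/minikube/kubeadm.yaml. As a rough sketch only (Go text/template with hypothetical type and template names, not the minikube source), the InitConfiguration fragment could be produced like this:

	// Illustrative sketch: render an InitConfiguration fragment from node
	// parameters; minikube transfers the rendered bytes over SSH ("scp memory")
	// instead of writing them locally.
	package main

	import (
		"os"
		"text/template"
	)

	type nodeParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		p := nodeParams{
			AdvertiseAddress: "192.168.61.222",
			BindPort:         8443,
			NodeName:         "no-preload-242725",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}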
	I1212 01:04:17.855844  141411 ssh_runner.go:195] Run: grep 192.168.61.222	control-plane.minikube.internal$ /etc/hosts
	I1212 01:04:17.860034  141411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 01:04:17.874418  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:04:18.011357  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:04:18.028641  141411 certs.go:68] Setting up /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725 for IP: 192.168.61.222
	I1212 01:04:18.028666  141411 certs.go:194] generating shared ca certs ...
	I1212 01:04:18.028683  141411 certs.go:226] acquiring lock for ca certs: {Name:mka9ea18513c4060e2e33ca32fb36d76a5887cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:04:18.028880  141411 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key
	I1212 01:04:18.028940  141411 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key
	I1212 01:04:18.028954  141411 certs.go:256] generating profile certs ...
	I1212 01:04:18.029088  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.key
	I1212 01:04:18.029164  141411 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key.f2ca822e
	I1212 01:04:18.029235  141411 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key
	I1212 01:04:18.029404  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem (1338 bytes)
	W1212 01:04:18.029438  141411 certs.go:480] ignoring /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600_empty.pem, impossibly tiny 0 bytes
	I1212 01:04:18.029449  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 01:04:18.029485  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/ca.pem (1078 bytes)
	I1212 01:04:18.029517  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/cert.pem (1123 bytes)
	I1212 01:04:18.029555  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/certs/key.pem (1675 bytes)
	I1212 01:04:18.029621  141411 certs.go:484] found cert: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem (1708 bytes)
	I1212 01:04:18.030313  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 01:04:18.082776  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1212 01:04:18.116012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 01:04:18.147385  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 01:04:18.180861  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 01:04:18.225067  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 01:04:18.255999  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 01:04:18.280193  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 01:04:18.304830  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/ssl/certs/936002.pem --> /usr/share/ca-certificates/936002.pem (1708 bytes)
	I1212 01:04:18.329012  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 01:04:18.355462  141411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20083-86355/.minikube/certs/93600.pem --> /usr/share/ca-certificates/93600.pem (1338 bytes)
	I1212 01:04:18.379991  141411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 01:04:18.397637  141411 ssh_runner.go:195] Run: openssl version
	I1212 01:04:18.403727  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93600.pem && ln -fs /usr/share/ca-certificates/93600.pem /etc/ssl/certs/93600.pem"
	I1212 01:04:18.415261  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419809  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 11 23:49 /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.419885  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93600.pem
	I1212 01:04:18.425687  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/93600.pem /etc/ssl/certs/51391683.0"
	I1212 01:04:18.438938  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936002.pem && ln -fs /usr/share/ca-certificates/936002.pem /etc/ssl/certs/936002.pem"
	I1212 01:04:18.452150  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457050  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 11 23:49 /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.457116  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936002.pem
	I1212 01:04:18.463151  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/936002.pem /etc/ssl/certs/3ec20f2e.0"
	I1212 01:04:18.476193  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1212 01:04:18.489034  141411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493916  141411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.493969  141411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 01:04:18.500285  141411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1212 01:04:18.513016  141411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 01:04:18.517996  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 01:04:18.524465  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 01:04:18.530607  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 01:04:18.536857  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 01:04:18.542734  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 01:04:18.548786  141411 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
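	Each `openssl x509 -noout -checkend 86400` run above succeeds only if the certificate is still valid 86400 seconds (24 hours) from now, which is why the earlier cert steps can skip regenerating still-valid profile certs. A minimal Go equivalent of that check (a hypothetical helper, not minikube code) looks like this:

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		raw, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h; would regenerate")
		} else {
			fmt.Println("certificate valid for at least 24h; skipping regeneration")
		}
	}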
	I1212 01:04:18.554771  141411 kubeadm.go:392] StartCluster: {Name:no-preload-242725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-242725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 01:04:18.554897  141411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 01:04:18.554950  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.593038  141411 cri.go:89] found id: ""
	I1212 01:04:18.593131  141411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 01:04:18.604527  141411 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1212 01:04:18.604550  141411 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1212 01:04:18.604605  141411 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 01:04:18.614764  141411 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 01:04:18.616082  141411 kubeconfig.go:125] found "no-preload-242725" server: "https://192.168.61.222:8443"
	I1212 01:04:18.618611  141411 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 01:04:18.628709  141411 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.222
	I1212 01:04:18.628741  141411 kubeadm.go:1160] stopping kube-system containers ...
	I1212 01:04:18.628753  141411 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 01:04:18.628814  141411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 01:04:18.673970  141411 cri.go:89] found id: ""
	I1212 01:04:18.674067  141411 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 01:04:18.692603  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:04:18.704916  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:04:18.704940  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:04:18.704999  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:04:18.714952  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:04:18.715015  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:04:18.724982  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:04:18.734756  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:04:18.734817  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:04:18.744528  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.753898  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:04:18.753955  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:04:18.763929  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:04:18.773108  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:04:18.773153  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:04:18.782710  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:04:18.792750  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:18.902446  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.056638  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.154145942s)
	I1212 01:04:20.056677  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.275475  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.348697  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:20.483317  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:04:20.483487  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.983704  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.484485  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.526353  141411 api_server.go:72] duration metric: took 1.043031812s to wait for apiserver process to appear ...
	I1212 01:04:21.526389  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:04:21.526415  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:20.207458  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:22.212936  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:20.335276  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:20.835232  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.335776  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:21.835983  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.335369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:22.836160  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.335257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:23.835348  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.336170  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.835521  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:24.362548  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.362574  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.362586  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.380904  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 01:04:24.380939  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 01:04:24.527174  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:24.533112  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:24.533146  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.026678  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.031368  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.031409  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:25.526576  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:25.532260  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 01:04:25.532297  141411 api_server.go:103] status: https://192.168.61.222:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 01:04:26.026741  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:04:26.031841  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:04:26.038198  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:04:26.038228  141411 api_server.go:131] duration metric: took 4.511829936s to wait for apiserver health ...
	I1212 01:04:26.038240  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:04:26.038249  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:04:26.040150  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
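	The healthz wait above first gets 403 (the unauthenticated probe is rejected while the rbac/bootstrap-roles post-start hook is still incomplete), then 500 while the remaining bootstrap hooks finish, and finally 200/ok after roughly 4.5 seconds. A minimal poller in the same spirit is sketched below; it assumes it is acceptable to skip TLS verification for this bootstrap-only probe, whereas a production client would trust the cluster CA instead.

	// Poll https://<node>:8443/healthz until it returns 200, tolerating the
	// transient 403/500 responses seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.222:8443/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
	}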
	I1212 01:04:22.343994  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:24.344818  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.346428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:26.041669  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:04:26.055010  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:04:26.076860  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:04:26.092122  141411 system_pods.go:59] 8 kube-system pods found
	I1212 01:04:26.092154  141411 system_pods.go:61] "coredns-7c65d6cfc9-7w9dc" [878bfb78-fae5-4e05-b0ae-362841eace85] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 01:04:26.092163  141411 system_pods.go:61] "etcd-no-preload-242725" [ed97c029-7933-4f4e-ab6c-f514b963ce21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 01:04:26.092170  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [df66d12b-b847-4ef3-b610-5679ff50e8c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 01:04:26.092175  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [eb5bc914-4267-41e8-9b37-26b7d3da9f68] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 01:04:26.092180  141411 system_pods.go:61] "kube-proxy-rjwps" [fccefb3e-a282-4f0e-9070-11cc95bca868] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1212 01:04:26.092185  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [139de4ad-468c-4f1b-becf-3708bcaa7c8f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 01:04:26.092190  141411 system_pods.go:61] "metrics-server-6867b74b74-xzkbn" [16e0364c-18f9-43c2-9394-bc8548ce9caa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:04:26.092194  141411 system_pods.go:61] "storage-provisioner" [06c3232e-011a-4aff-b3ca-81858355bef4] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 01:04:26.092200  141411 system_pods.go:74] duration metric: took 15.315757ms to wait for pod list to return data ...
	I1212 01:04:26.092208  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:04:26.095691  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:04:26.095715  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:04:26.095725  141411 node_conditions.go:105] duration metric: took 3.513466ms to run NodePressure ...
	I1212 01:04:26.095742  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 01:04:26.389652  141411 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398484  141411 kubeadm.go:739] kubelet initialised
	I1212 01:04:26.398513  141411 kubeadm.go:740] duration metric: took 8.824036ms waiting for restarted kubelet to initialise ...
	I1212 01:04:26.398524  141411 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:04:26.406667  141411 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.416093  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416137  141411 pod_ready.go:82] duration metric: took 9.418311ms for pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.416151  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "coredns-7c65d6cfc9-7w9dc" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.416165  141411 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.422922  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422951  141411 pod_ready.go:82] duration metric: took 6.774244ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.422962  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "etcd-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.422971  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.429822  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429854  141411 pod_ready.go:82] duration metric: took 6.874602ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.429866  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-apiserver-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.429875  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:26.483542  141411 pod_ready.go:98] node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483578  141411 pod_ready.go:82] duration metric: took 53.690915ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	E1212 01:04:26.483609  141411 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-242725" hosting pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-242725" has status "Ready":"False"
	I1212 01:04:26.483622  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
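	The pod_ready checks above poll each system-critical pod and deliberately skip pods hosted on a node that still reports Ready=False. A comparable standalone check with client-go is sketched below, under the assumption that /var/lib/minikube/kubeconfig is reachable from where it runs; the pod name is taken from the log and the polling loop is simplified.

	// podReady reports whether the named kube-system pod has condition Ready=True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			ok, err := podReady(ctx, cs, "etcd-no-preload-242725")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for pod to become Ready")
			case <-time.After(2 * time.Second):
			}
		}
	}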
	I1212 01:04:24.707572  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:27.207073  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:25.335742  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:25.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.335824  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:26.836097  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.335807  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:27.835612  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.335615  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.835140  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.335695  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:29.836018  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:28.843868  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.844684  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:28.081872  141411 pod_ready.go:93] pod "kube-proxy-rjwps" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:28.081901  141411 pod_ready.go:82] duration metric: took 1.598267411s for pod "kube-proxy-rjwps" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:28.081921  141411 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:30.088965  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:32.099574  141411 pod_ready.go:103] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:29.706557  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:31.706767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:33.706983  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:30.335304  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:30.835767  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.335536  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:31.836051  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.336149  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:32.835257  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.335529  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.835959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.336054  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:34.835955  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:33.344074  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.345401  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:34.588690  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:04:34.588715  141411 pod_ready.go:82] duration metric: took 6.50678624s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:34.588727  141411 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	I1212 01:04:36.596475  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:36.207357  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:38.207516  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:35.335472  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:35.835166  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.335337  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:36.835553  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.336098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.835686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.335195  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:38.835464  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.336101  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:39.836164  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:37.844602  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.845115  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:39.095215  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:41.594487  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.708001  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:42.708477  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:40.336111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:40.835714  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.335249  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:41.836111  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.335205  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.836175  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.335577  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:43.835336  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.335947  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:44.835740  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:42.344150  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.844336  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:43.595231  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:46.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:44.708857  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:47.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.207408  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:45.335845  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:45.835169  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.335842  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.835872  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.335682  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:47.835761  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.336087  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:48.835234  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.335460  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:49.836134  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:46.844848  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:49.344941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:48.595492  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.095830  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:51.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.706544  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:50.335959  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:50.835873  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:50.835996  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:50.878308  142150 cri.go:89] found id: ""
	I1212 01:04:50.878347  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.878360  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:50.878377  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:50.878444  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:50.914645  142150 cri.go:89] found id: ""
	I1212 01:04:50.914673  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.914681  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:50.914687  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:50.914736  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:50.954258  142150 cri.go:89] found id: ""
	I1212 01:04:50.954286  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.954307  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:50.954314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:50.954376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:50.993317  142150 cri.go:89] found id: ""
	I1212 01:04:50.993353  142150 logs.go:282] 0 containers: []
	W1212 01:04:50.993361  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:50.993367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:50.993430  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:51.028521  142150 cri.go:89] found id: ""
	I1212 01:04:51.028551  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.028565  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:51.028572  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:51.028653  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:51.064752  142150 cri.go:89] found id: ""
	I1212 01:04:51.064779  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.064791  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:51.064799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:51.064861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:51.099780  142150 cri.go:89] found id: ""
	I1212 01:04:51.099809  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.099820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:51.099828  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:51.099910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:51.140668  142150 cri.go:89] found id: ""
	I1212 01:04:51.140696  142150 logs.go:282] 0 containers: []
	W1212 01:04:51.140704  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:51.140713  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:51.140747  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.181092  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:51.181123  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:51.239873  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:51.239914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:51.256356  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:51.256383  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:51.391545  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:51.391573  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:51.391602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
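	(Editor's note: the block above is one full iteration of minikube's diagnostic loop for this cluster: probe for a kube-apiserver process, ask the CRI runtime for a container matching each expected control-plane component (all queries return empty), then gather kubelet, dmesg, describe-nodes, CRI-O and container-status logs; "kubectl describe nodes" fails because nothing is listening on localhost:8443. The same iteration repeats below every few seconds until the wait times out. A condensed sketch of the probe sequence, using only commands visible in the ssh_runner lines above -- the loop wrapper is my own shorthand, not minikube code:

	    # Probe for a running apiserver process, then for each expected container.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"   # empty output => "No container was found"
	    done

	    # Log-gathering steps ("Gathering logs for ..."):
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig   # refused while the apiserver is down
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
	)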
	I1212 01:04:53.965098  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:53.981900  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:53.981994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:54.033922  142150 cri.go:89] found id: ""
	I1212 01:04:54.033955  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.033967  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:54.033975  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:54.034038  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:54.084594  142150 cri.go:89] found id: ""
	I1212 01:04:54.084623  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.084634  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:54.084641  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:54.084704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:54.132671  142150 cri.go:89] found id: ""
	I1212 01:04:54.132700  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.132708  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:54.132714  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:54.132768  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:54.169981  142150 cri.go:89] found id: ""
	I1212 01:04:54.170011  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.170019  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:54.170025  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:54.170078  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:54.207708  142150 cri.go:89] found id: ""
	I1212 01:04:54.207737  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.207747  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:54.207753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:54.207812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:54.248150  142150 cri.go:89] found id: ""
	I1212 01:04:54.248176  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.248184  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:54.248191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:54.248240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:54.287792  142150 cri.go:89] found id: ""
	I1212 01:04:54.287820  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.287829  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:54.287835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:54.287892  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:54.322288  142150 cri.go:89] found id: ""
	I1212 01:04:54.322319  142150 logs.go:282] 0 containers: []
	W1212 01:04:54.322330  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:54.322347  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:54.322364  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:54.378947  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:54.378989  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:54.394801  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:54.394845  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:54.473896  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:54.473916  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:54.473929  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:54.558076  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:54.558135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:04:51.843857  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:54.345207  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:53.095934  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.598377  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:55.706720  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.707883  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:57.102923  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:04:57.117418  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:04:57.117478  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:57.157977  142150 cri.go:89] found id: ""
	I1212 01:04:57.158003  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.158012  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:04:57.158017  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:04:57.158074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:04:57.196388  142150 cri.go:89] found id: ""
	I1212 01:04:57.196417  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.196427  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:04:57.196432  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:04:57.196484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:04:57.238004  142150 cri.go:89] found id: ""
	I1212 01:04:57.238040  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.238048  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:04:57.238055  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:04:57.238124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:04:57.276619  142150 cri.go:89] found id: ""
	I1212 01:04:57.276665  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.276676  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:04:57.276684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:04:57.276750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:04:57.313697  142150 cri.go:89] found id: ""
	I1212 01:04:57.313733  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.313745  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:04:57.313753  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:04:57.313823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:04:57.351569  142150 cri.go:89] found id: ""
	I1212 01:04:57.351616  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.351629  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:04:57.351637  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:04:57.351705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:04:57.386726  142150 cri.go:89] found id: ""
	I1212 01:04:57.386758  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.386766  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:04:57.386772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:04:57.386821  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:04:57.421496  142150 cri.go:89] found id: ""
	I1212 01:04:57.421524  142150 logs.go:282] 0 containers: []
	W1212 01:04:57.421533  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:04:57.421543  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:04:57.421555  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:04:57.475374  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:04:57.475425  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:04:57.490771  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:04:57.490813  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:04:57.562485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:04:57.562513  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:04:57.562530  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:04:57.645022  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:04:57.645070  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.193526  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:00.209464  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:00.209539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:04:56.843562  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.843654  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:01.343428  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:04:58.095640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.596162  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.207281  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:02.706000  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:00.248388  142150 cri.go:89] found id: ""
	I1212 01:05:00.248417  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.248426  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:00.248431  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:00.248480  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:00.284598  142150 cri.go:89] found id: ""
	I1212 01:05:00.284632  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.284642  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:00.284648  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:00.284710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:00.321068  142150 cri.go:89] found id: ""
	I1212 01:05:00.321107  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.321119  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:00.321127  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:00.321189  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:00.358622  142150 cri.go:89] found id: ""
	I1212 01:05:00.358651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.358660  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:00.358666  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:00.358720  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:00.398345  142150 cri.go:89] found id: ""
	I1212 01:05:00.398373  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.398383  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:00.398390  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:00.398442  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:00.437178  142150 cri.go:89] found id: ""
	I1212 01:05:00.437215  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.437227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:00.437235  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:00.437307  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:00.472621  142150 cri.go:89] found id: ""
	I1212 01:05:00.472651  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.472662  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:00.472668  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:00.472735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:00.510240  142150 cri.go:89] found id: ""
	I1212 01:05:00.510268  142150 logs.go:282] 0 containers: []
	W1212 01:05:00.510278  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:00.510288  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:00.510301  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:00.596798  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:00.596819  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:00.596830  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:00.673465  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:00.673506  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:00.716448  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:00.716485  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:00.770265  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:00.770303  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.285159  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:03.299981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:03.300043  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:03.335198  142150 cri.go:89] found id: ""
	I1212 01:05:03.335227  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.335239  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:03.335248  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:03.335319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:03.372624  142150 cri.go:89] found id: ""
	I1212 01:05:03.372651  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.372659  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:03.372665  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:03.372712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:03.408235  142150 cri.go:89] found id: ""
	I1212 01:05:03.408267  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.408279  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:03.408286  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:03.408350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:03.448035  142150 cri.go:89] found id: ""
	I1212 01:05:03.448068  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.448083  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:03.448091  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:03.448144  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:03.488563  142150 cri.go:89] found id: ""
	I1212 01:05:03.488593  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.488602  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:03.488607  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:03.488658  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:03.527858  142150 cri.go:89] found id: ""
	I1212 01:05:03.527886  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.527905  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:03.527913  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:03.527969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:03.564004  142150 cri.go:89] found id: ""
	I1212 01:05:03.564034  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.564044  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:03.564052  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:03.564113  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:03.610648  142150 cri.go:89] found id: ""
	I1212 01:05:03.610679  142150 logs.go:282] 0 containers: []
	W1212 01:05:03.610691  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:03.610702  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:03.610716  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:03.666958  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:03.666996  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:03.680927  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:03.680961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:03.762843  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:03.762876  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:03.762894  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:03.838434  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:03.838472  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:03.344025  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.844236  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:03.095197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:05.096865  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:04.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.208202  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:06.377590  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:06.391770  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:06.391861  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:06.430050  142150 cri.go:89] found id: ""
	I1212 01:05:06.430083  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.430096  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:06.430103  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:06.430168  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:06.467980  142150 cri.go:89] found id: ""
	I1212 01:05:06.468014  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.468026  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:06.468033  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:06.468090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:06.505111  142150 cri.go:89] found id: ""
	I1212 01:05:06.505144  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.505156  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:06.505165  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:06.505235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:06.542049  142150 cri.go:89] found id: ""
	I1212 01:05:06.542091  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.542104  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:06.542112  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:06.542175  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:06.576957  142150 cri.go:89] found id: ""
	I1212 01:05:06.576982  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.576991  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:06.576997  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:06.577050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:06.613930  142150 cri.go:89] found id: ""
	I1212 01:05:06.613963  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.613974  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:06.613980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:06.614045  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:06.654407  142150 cri.go:89] found id: ""
	I1212 01:05:06.654441  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.654450  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:06.654455  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:06.654503  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:06.691074  142150 cri.go:89] found id: ""
	I1212 01:05:06.691103  142150 logs.go:282] 0 containers: []
	W1212 01:05:06.691112  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:06.691122  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:06.691133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:06.748638  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:06.748674  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:06.762741  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:06.762772  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:06.833840  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:06.833867  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:06.833885  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:06.914595  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:06.914649  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.461666  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:09.478815  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:09.478889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:09.515975  142150 cri.go:89] found id: ""
	I1212 01:05:09.516007  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.516019  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:09.516042  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:09.516120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:09.556933  142150 cri.go:89] found id: ""
	I1212 01:05:09.556965  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.556977  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:09.556985  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:09.557050  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:09.593479  142150 cri.go:89] found id: ""
	I1212 01:05:09.593509  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.593520  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:09.593528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:09.593595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:09.633463  142150 cri.go:89] found id: ""
	I1212 01:05:09.633501  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.633513  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:09.633522  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:09.633583  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:09.666762  142150 cri.go:89] found id: ""
	I1212 01:05:09.666789  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.666798  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:09.666804  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:09.666871  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:09.704172  142150 cri.go:89] found id: ""
	I1212 01:05:09.704206  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.704217  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:09.704228  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:09.704288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:09.749679  142150 cri.go:89] found id: ""
	I1212 01:05:09.749708  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.749717  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:09.749724  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:09.749791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:09.789339  142150 cri.go:89] found id: ""
	I1212 01:05:09.789370  142150 logs.go:282] 0 containers: []
	W1212 01:05:09.789379  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:09.789388  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:09.789399  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:09.875218  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:09.875259  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:09.918042  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:09.918074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:09.971010  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:09.971052  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:09.985524  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:09.985553  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:10.059280  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:08.343968  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:10.844912  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:07.595940  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.596206  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.094527  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:09.707469  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.206124  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.206285  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:12.560353  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:12.573641  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:12.573719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:12.611903  142150 cri.go:89] found id: ""
	I1212 01:05:12.611931  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.611940  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:12.611947  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:12.612019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:12.647038  142150 cri.go:89] found id: ""
	I1212 01:05:12.647078  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.647090  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:12.647099  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:12.647188  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:12.684078  142150 cri.go:89] found id: ""
	I1212 01:05:12.684111  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.684123  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:12.684132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:12.684194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:12.720094  142150 cri.go:89] found id: ""
	I1212 01:05:12.720125  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.720137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:12.720145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:12.720208  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:12.762457  142150 cri.go:89] found id: ""
	I1212 01:05:12.762492  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.762504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:12.762512  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:12.762564  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:12.798100  142150 cri.go:89] found id: ""
	I1212 01:05:12.798131  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.798139  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:12.798145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:12.798195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:12.832455  142150 cri.go:89] found id: ""
	I1212 01:05:12.832486  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.832494  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:12.832501  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:12.832558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:12.866206  142150 cri.go:89] found id: ""
	I1212 01:05:12.866239  142150 logs.go:282] 0 containers: []
	W1212 01:05:12.866249  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:12.866258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:12.866273  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:12.918512  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:12.918550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:12.932506  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:12.932535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:13.011647  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:13.011670  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:13.011689  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:13.090522  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:13.090565  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:13.343045  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.343706  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:14.096430  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.097196  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:16.207697  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.707382  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:15.634171  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:15.648003  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:15.648067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:15.684747  142150 cri.go:89] found id: ""
	I1212 01:05:15.684780  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.684788  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:15.684795  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:15.684856  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:15.723209  142150 cri.go:89] found id: ""
	I1212 01:05:15.723236  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.723245  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:15.723252  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:15.723299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:15.761473  142150 cri.go:89] found id: ""
	I1212 01:05:15.761504  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.761513  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:15.761519  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:15.761588  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:15.795637  142150 cri.go:89] found id: ""
	I1212 01:05:15.795668  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.795677  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:15.795685  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:15.795735  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:15.835576  142150 cri.go:89] found id: ""
	I1212 01:05:15.835616  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.835628  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:15.835636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:15.835690  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:15.877331  142150 cri.go:89] found id: ""
	I1212 01:05:15.877359  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.877370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:15.877379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:15.877440  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:15.914225  142150 cri.go:89] found id: ""
	I1212 01:05:15.914255  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.914265  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:15.914271  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:15.914323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:15.949819  142150 cri.go:89] found id: ""
	I1212 01:05:15.949845  142150 logs.go:282] 0 containers: []
	W1212 01:05:15.949853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:15.949862  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:15.949877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:16.029950  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:16.029991  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:16.071065  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:16.071094  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:16.126731  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:16.126786  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:16.140774  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:16.140807  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:16.210269  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:18.710498  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:18.725380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:18.725462  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:18.762409  142150 cri.go:89] found id: ""
	I1212 01:05:18.762438  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.762446  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:18.762453  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:18.762501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:18.800308  142150 cri.go:89] found id: ""
	I1212 01:05:18.800336  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.800344  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:18.800351  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:18.800419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:18.834918  142150 cri.go:89] found id: ""
	I1212 01:05:18.834947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.834955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:18.834962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:18.835012  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:18.872434  142150 cri.go:89] found id: ""
	I1212 01:05:18.872470  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.872481  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:18.872490  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:18.872551  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:18.906919  142150 cri.go:89] found id: ""
	I1212 01:05:18.906947  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.906955  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:18.906962  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:18.907011  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:18.944626  142150 cri.go:89] found id: ""
	I1212 01:05:18.944661  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.944671  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:18.944677  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:18.944728  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:18.981196  142150 cri.go:89] found id: ""
	I1212 01:05:18.981224  142150 logs.go:282] 0 containers: []
	W1212 01:05:18.981233  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:18.981239  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:18.981290  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:19.017640  142150 cri.go:89] found id: ""
	I1212 01:05:19.017669  142150 logs.go:282] 0 containers: []
	W1212 01:05:19.017679  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:19.017691  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:19.017728  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:19.089551  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:19.089582  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:19.089602  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:19.176914  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:19.176958  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:19.223652  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:19.223694  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:19.281292  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:19.281353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:17.344863  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:19.348835  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:18.595465  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:20.708087  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:22.708298  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:21.797351  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:21.811040  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:21.811120  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:21.847213  142150 cri.go:89] found id: ""
	I1212 01:05:21.847242  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.847253  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:21.847261  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:21.847323  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:21.883925  142150 cri.go:89] found id: ""
	I1212 01:05:21.883952  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.883961  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:21.883967  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:21.884029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:21.925919  142150 cri.go:89] found id: ""
	I1212 01:05:21.925946  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.925955  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:21.925961  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:21.926025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:21.963672  142150 cri.go:89] found id: ""
	I1212 01:05:21.963708  142150 logs.go:282] 0 containers: []
	W1212 01:05:21.963719  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:21.963728  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:21.963794  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:22.000058  142150 cri.go:89] found id: ""
	I1212 01:05:22.000086  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.000094  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:22.000100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:22.000153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:22.036262  142150 cri.go:89] found id: ""
	I1212 01:05:22.036294  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.036305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:22.036314  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:22.036381  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:22.072312  142150 cri.go:89] found id: ""
	I1212 01:05:22.072348  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.072361  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:22.072369  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:22.072428  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:22.109376  142150 cri.go:89] found id: ""
	I1212 01:05:22.109406  142150 logs.go:282] 0 containers: []
	W1212 01:05:22.109413  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:22.109422  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:22.109436  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:22.183975  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:22.184006  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:22.184024  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:22.262037  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:22.262076  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:22.306902  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:22.306934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:22.361922  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:22.361964  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:24.877203  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:24.891749  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:24.891822  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:24.926934  142150 cri.go:89] found id: ""
	I1212 01:05:24.926974  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.926987  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:24.926997  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:24.927061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:24.961756  142150 cri.go:89] found id: ""
	I1212 01:05:24.961791  142150 logs.go:282] 0 containers: []
	W1212 01:05:24.961803  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:24.961812  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:24.961872  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:25.001414  142150 cri.go:89] found id: ""
	I1212 01:05:25.001449  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.001462  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:25.001470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:25.001536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:25.038398  142150 cri.go:89] found id: ""
	I1212 01:05:25.038429  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.038438  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:25.038443  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:25.038499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:25.074146  142150 cri.go:89] found id: ""
	I1212 01:05:25.074175  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.074184  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:25.074191  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:25.074266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:25.112259  142150 cri.go:89] found id: ""
	I1212 01:05:25.112287  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.112295  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:25.112303  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:25.112366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:25.148819  142150 cri.go:89] found id: ""
	I1212 01:05:25.148846  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.148853  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:25.148859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:25.148916  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:25.191229  142150 cri.go:89] found id: ""
	I1212 01:05:25.191262  142150 logs.go:282] 0 containers: []
	W1212 01:05:25.191274  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:25.191286  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:25.191298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:21.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:24.344442  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:26.344638  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:23.095266  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.096246  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.097041  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.208225  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:27.706184  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:25.280584  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:25.280641  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:25.325436  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:25.325473  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:25.380358  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:25.380406  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:25.394854  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:25.394889  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:25.474359  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:27.975286  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:27.989833  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:27.989893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:28.027211  142150 cri.go:89] found id: ""
	I1212 01:05:28.027242  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.027254  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:28.027262  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:28.027319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:28.063115  142150 cri.go:89] found id: ""
	I1212 01:05:28.063147  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.063158  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:28.063165  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:28.063226  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:28.121959  142150 cri.go:89] found id: ""
	I1212 01:05:28.121993  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.122006  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:28.122014  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:28.122074  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:28.161636  142150 cri.go:89] found id: ""
	I1212 01:05:28.161666  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.161674  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:28.161680  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:28.161745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:28.197581  142150 cri.go:89] found id: ""
	I1212 01:05:28.197615  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.197627  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:28.197636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:28.197704  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:28.234811  142150 cri.go:89] found id: ""
	I1212 01:05:28.234839  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.234849  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:28.234857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:28.234914  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:28.275485  142150 cri.go:89] found id: ""
	I1212 01:05:28.275510  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.275518  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:28.275524  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:28.275570  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:28.311514  142150 cri.go:89] found id: ""
	I1212 01:05:28.311551  142150 logs.go:282] 0 containers: []
	W1212 01:05:28.311562  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:28.311574  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:28.311608  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:28.362113  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:28.362153  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:28.376321  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:28.376353  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:28.460365  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:28.460394  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:28.460412  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:28.545655  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:28.545697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:28.850925  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.344959  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.595032  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.595989  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:29.706696  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:32.206728  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.206974  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:31.088684  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:31.103954  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:31.104033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:31.143436  142150 cri.go:89] found id: ""
	I1212 01:05:31.143468  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.143478  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:31.143488  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:31.143541  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:31.181127  142150 cri.go:89] found id: ""
	I1212 01:05:31.181162  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.181173  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:31.181181  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:31.181246  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:31.217764  142150 cri.go:89] found id: ""
	I1212 01:05:31.217794  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.217805  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:31.217812  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:31.217882  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:31.253648  142150 cri.go:89] found id: ""
	I1212 01:05:31.253674  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.253683  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:31.253690  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:31.253745  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:31.292365  142150 cri.go:89] found id: ""
	I1212 01:05:31.292393  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.292401  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:31.292407  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:31.292455  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:31.329834  142150 cri.go:89] found id: ""
	I1212 01:05:31.329866  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.329876  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:31.329883  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:31.329934  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:31.368679  142150 cri.go:89] found id: ""
	I1212 01:05:31.368712  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.368720  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:31.368726  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:31.368784  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:31.409003  142150 cri.go:89] found id: ""
	I1212 01:05:31.409028  142150 logs.go:282] 0 containers: []
	W1212 01:05:31.409036  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:31.409053  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:31.409068  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:31.462888  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:31.462927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:31.477975  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:31.478011  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:31.545620  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:31.545648  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:31.545665  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:31.626530  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:31.626570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.167917  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:34.183293  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:34.183372  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:34.219167  142150 cri.go:89] found id: ""
	I1212 01:05:34.219191  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.219200  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:34.219206  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:34.219265  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:34.254552  142150 cri.go:89] found id: ""
	I1212 01:05:34.254580  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.254588  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:34.254594  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:34.254645  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:34.289933  142150 cri.go:89] found id: ""
	I1212 01:05:34.289960  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.289969  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:34.289975  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:34.290027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:34.325468  142150 cri.go:89] found id: ""
	I1212 01:05:34.325497  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.325505  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:34.325510  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:34.325558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:34.364154  142150 cri.go:89] found id: ""
	I1212 01:05:34.364185  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.364197  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:34.364205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:34.364256  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:34.400516  142150 cri.go:89] found id: ""
	I1212 01:05:34.400546  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.400554  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:34.400559  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:34.400621  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:34.437578  142150 cri.go:89] found id: ""
	I1212 01:05:34.437608  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.437616  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:34.437622  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:34.437687  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:34.472061  142150 cri.go:89] found id: ""
	I1212 01:05:34.472094  142150 logs.go:282] 0 containers: []
	W1212 01:05:34.472105  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:34.472117  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:34.472135  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:34.526286  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:34.526340  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:34.610616  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:34.610664  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:34.625098  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:34.625130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:34.699706  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:34.699736  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:34.699759  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:33.844343  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.343847  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:34.096631  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.594963  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:36.707213  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:39.207473  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:37.282716  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:37.299415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:37.299486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:37.337783  142150 cri.go:89] found id: ""
	I1212 01:05:37.337820  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.337833  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:37.337842  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:37.337910  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:37.375491  142150 cri.go:89] found id: ""
	I1212 01:05:37.375526  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.375539  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:37.375547  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:37.375637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:37.417980  142150 cri.go:89] found id: ""
	I1212 01:05:37.418016  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.418028  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:37.418037  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:37.418115  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:37.454902  142150 cri.go:89] found id: ""
	I1212 01:05:37.454936  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.454947  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:37.454956  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:37.455029  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:37.492144  142150 cri.go:89] found id: ""
	I1212 01:05:37.492175  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.492188  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:37.492196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:37.492266  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:37.531054  142150 cri.go:89] found id: ""
	I1212 01:05:37.531085  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.531094  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:37.531100  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:37.531161  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:37.565127  142150 cri.go:89] found id: ""
	I1212 01:05:37.565169  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.565191  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:37.565209  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:37.565269  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:37.601233  142150 cri.go:89] found id: ""
	I1212 01:05:37.601273  142150 logs.go:282] 0 containers: []
	W1212 01:05:37.601286  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:37.601300  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:37.601315  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:37.652133  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:37.652172  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:37.666974  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:37.667007  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:37.744500  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:37.744527  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:37.744544  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:37.825572  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:37.825611  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:38.842756  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.845163  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:38.595482  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.595779  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:41.707367  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:44.206693  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:40.366883  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:40.380597  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:40.380662  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:40.417588  142150 cri.go:89] found id: ""
	I1212 01:05:40.417614  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.417623  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:40.417629  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:40.417681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:40.452506  142150 cri.go:89] found id: ""
	I1212 01:05:40.452535  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.452547  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:40.452555  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:40.452620  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:40.496623  142150 cri.go:89] found id: ""
	I1212 01:05:40.496657  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.496669  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:40.496681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:40.496755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:40.534202  142150 cri.go:89] found id: ""
	I1212 01:05:40.534241  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.534266  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:40.534277  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:40.534337  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:40.580317  142150 cri.go:89] found id: ""
	I1212 01:05:40.580346  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.580359  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:40.580367  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:40.580437  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:40.616814  142150 cri.go:89] found id: ""
	I1212 01:05:40.616842  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.616850  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:40.616857  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:40.616909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:40.653553  142150 cri.go:89] found id: ""
	I1212 01:05:40.653584  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.653593  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:40.653603  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:40.653667  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:40.687817  142150 cri.go:89] found id: ""
	I1212 01:05:40.687843  142150 logs.go:282] 0 containers: []
	W1212 01:05:40.687852  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:40.687862  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:40.687872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:40.739304  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:40.739343  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:40.753042  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:40.753074  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:40.820091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:40.820112  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:40.820126  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:40.903503  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:40.903561  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.446157  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:43.461289  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:43.461365  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:43.503352  142150 cri.go:89] found id: ""
	I1212 01:05:43.503385  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.503394  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:43.503402  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:43.503466  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:43.541576  142150 cri.go:89] found id: ""
	I1212 01:05:43.541610  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.541619  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:43.541626  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:43.541683  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:43.581255  142150 cri.go:89] found id: ""
	I1212 01:05:43.581285  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.581298  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:43.581305  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:43.581384  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:43.622081  142150 cri.go:89] found id: ""
	I1212 01:05:43.622114  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.622126  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:43.622135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:43.622201  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:43.657001  142150 cri.go:89] found id: ""
	I1212 01:05:43.657032  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.657041  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:43.657048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:43.657114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:43.691333  142150 cri.go:89] found id: ""
	I1212 01:05:43.691362  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.691370  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:43.691376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:43.691425  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:43.728745  142150 cri.go:89] found id: ""
	I1212 01:05:43.728779  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.728791  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:43.728799  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:43.728864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:43.764196  142150 cri.go:89] found id: ""
	I1212 01:05:43.764229  142150 logs.go:282] 0 containers: []
	W1212 01:05:43.764241  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:43.764253  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:43.764268  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:43.804433  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:43.804469  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:43.858783  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:43.858822  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:43.873582  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:43.873610  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:43.949922  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:43.949945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:43.949962  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:43.343827  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.346793  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:43.095993  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:45.096437  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.206828  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:48.708067  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:46.531390  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:46.546806  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:46.546881  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:46.583062  142150 cri.go:89] found id: ""
	I1212 01:05:46.583103  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.583116  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:46.583124  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:46.583187  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:46.621483  142150 cri.go:89] found id: ""
	I1212 01:05:46.621513  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.621524  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:46.621532  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:46.621595  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:46.658400  142150 cri.go:89] found id: ""
	I1212 01:05:46.658431  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.658440  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:46.658450  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:46.658520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:46.694368  142150 cri.go:89] found id: ""
	I1212 01:05:46.694393  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.694407  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:46.694413  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:46.694469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:46.733456  142150 cri.go:89] found id: ""
	I1212 01:05:46.733492  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.733504  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:46.733513  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:46.733574  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:46.767206  142150 cri.go:89] found id: ""
	I1212 01:05:46.767236  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.767248  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:46.767255  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:46.767317  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:46.803520  142150 cri.go:89] found id: ""
	I1212 01:05:46.803554  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.803564  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:46.803575  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:46.803657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:46.849563  142150 cri.go:89] found id: ""
	I1212 01:05:46.849590  142150 logs.go:282] 0 containers: []
	W1212 01:05:46.849597  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:46.849606  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:46.849618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:46.862800  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:46.862831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:46.931858  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:46.931883  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:46.931896  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:47.009125  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:47.009167  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.050830  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:47.050858  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.604639  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:49.618087  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:49.618153  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:49.653674  142150 cri.go:89] found id: ""
	I1212 01:05:49.653703  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.653712  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:49.653718  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:49.653772  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:49.688391  142150 cri.go:89] found id: ""
	I1212 01:05:49.688428  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.688439  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:49.688446  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:49.688516  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:49.729378  142150 cri.go:89] found id: ""
	I1212 01:05:49.729412  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.729423  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:49.729432  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:49.729492  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:49.765171  142150 cri.go:89] found id: ""
	I1212 01:05:49.765198  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.765206  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:49.765213  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:49.765260  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:49.800980  142150 cri.go:89] found id: ""
	I1212 01:05:49.801018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.801027  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:49.801034  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:49.801086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:49.836122  142150 cri.go:89] found id: ""
	I1212 01:05:49.836149  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.836161  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:49.836169  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:49.836235  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:49.873978  142150 cri.go:89] found id: ""
	I1212 01:05:49.874018  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.874027  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:49.874032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:49.874086  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:49.909709  142150 cri.go:89] found id: ""
	I1212 01:05:49.909741  142150 logs.go:282] 0 containers: []
	W1212 01:05:49.909754  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:49.909766  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:49.909783  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:49.963352  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:49.963394  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:49.977813  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:49.977841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:50.054423  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:50.054452  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:50.054470  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:50.133375  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:50.133416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:47.843200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:49.844564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:47.595931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:50.095312  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.096092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:51.206349  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:53.206853  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:52.673427  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:52.687196  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:52.687259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:52.725001  142150 cri.go:89] found id: ""
	I1212 01:05:52.725031  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.725039  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:52.725045  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:52.725110  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:52.760885  142150 cri.go:89] found id: ""
	I1212 01:05:52.760923  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.760934  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:52.760941  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:52.761025  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:52.798583  142150 cri.go:89] found id: ""
	I1212 01:05:52.798615  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.798627  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:52.798635  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:52.798700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:52.835957  142150 cri.go:89] found id: ""
	I1212 01:05:52.835983  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.835991  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:52.835998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:52.836065  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:52.876249  142150 cri.go:89] found id: ""
	I1212 01:05:52.876281  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.876292  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:52.876299  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:52.876397  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:52.911667  142150 cri.go:89] found id: ""
	I1212 01:05:52.911700  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.911712  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:52.911720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:52.911796  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:52.946781  142150 cri.go:89] found id: ""
	I1212 01:05:52.946808  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.946820  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:52.946827  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:52.946889  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:52.985712  142150 cri.go:89] found id: ""
	I1212 01:05:52.985740  142150 logs.go:282] 0 containers: []
	W1212 01:05:52.985752  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:52.985762  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:52.985778  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:53.038522  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:53.038563  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:53.052336  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:53.052382  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:53.132247  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:53.132280  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:53.132297  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:53.208823  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:53.208851  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
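	[Note: the block above is one complete diagnostics pass by the log gatherer: no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or kubernetes-dashboard container is found via crictl, so it falls back to host-level logs. A sketch of the same checks run by hand on the node, using only commands and paths already shown in the log above:
	
	    sudo crictl ps -a --quiet --name=kube-apiserver     # empty output here: no apiserver container exists
	    sudo journalctl -u kubelet -n 400                   # kubelet log tail
	    sudo journalctl -u crio -n 400                      # CRI-O log tail
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	                                                        # fails with "connection refused" on localhost:8443 while the apiserver is down
	
	The same cycle repeats in the entries below while the test keeps waiting for the apiserver to come up.]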
	I1212 01:05:52.344518  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.344667  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:54.594738  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:56.595036  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.206990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:57.207827  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.208307  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:55.747479  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:55.760703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:55.760765  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:55.797684  142150 cri.go:89] found id: ""
	I1212 01:05:55.797720  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.797732  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:55.797740  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:55.797807  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:55.840900  142150 cri.go:89] found id: ""
	I1212 01:05:55.840933  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.840944  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:55.840953  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:55.841033  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:55.879098  142150 cri.go:89] found id: ""
	I1212 01:05:55.879131  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.879144  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:55.879152  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:55.879217  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:55.914137  142150 cri.go:89] found id: ""
	I1212 01:05:55.914166  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.914174  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:55.914181  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:55.914238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:55.950608  142150 cri.go:89] found id: ""
	I1212 01:05:55.950635  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.950644  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:55.950654  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:55.950705  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:55.992162  142150 cri.go:89] found id: ""
	I1212 01:05:55.992187  142150 logs.go:282] 0 containers: []
	W1212 01:05:55.992196  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:55.992202  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:55.992254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:56.028071  142150 cri.go:89] found id: ""
	I1212 01:05:56.028097  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.028105  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:56.028111  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:56.028164  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:56.063789  142150 cri.go:89] found id: ""
	I1212 01:05:56.063814  142150 logs.go:282] 0 containers: []
	W1212 01:05:56.063822  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:56.063832  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:56.063844  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:56.118057  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:56.118096  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.132908  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:56.132939  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:56.200923  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:56.200951  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:56.200971  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:56.283272  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:56.283321  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:58.825548  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:05:58.839298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:05:58.839368  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:05:58.874249  142150 cri.go:89] found id: ""
	I1212 01:05:58.874289  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.874301  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:05:58.874313  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:05:58.874391  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:05:58.909238  142150 cri.go:89] found id: ""
	I1212 01:05:58.909273  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.909286  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:05:58.909294  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:05:58.909359  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:05:58.945112  142150 cri.go:89] found id: ""
	I1212 01:05:58.945139  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.945146  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:05:58.945154  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:05:58.945203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:05:58.981101  142150 cri.go:89] found id: ""
	I1212 01:05:58.981153  142150 logs.go:282] 0 containers: []
	W1212 01:05:58.981168  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:05:58.981176  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:05:58.981241  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:05:59.015095  142150 cri.go:89] found id: ""
	I1212 01:05:59.015135  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.015147  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:05:59.015158  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:05:59.015224  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:05:59.051606  142150 cri.go:89] found id: ""
	I1212 01:05:59.051640  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.051650  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:05:59.051659  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:05:59.051719  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:05:59.088125  142150 cri.go:89] found id: ""
	I1212 01:05:59.088153  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.088161  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:05:59.088166  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:05:59.088223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:05:59.127803  142150 cri.go:89] found id: ""
	I1212 01:05:59.127829  142150 logs.go:282] 0 containers: []
	W1212 01:05:59.127841  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:05:59.127853  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:05:59.127871  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:05:59.204831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:05:59.204857  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:05:59.204872  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:05:59.285346  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:05:59.285387  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:05:59.324194  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:05:59.324233  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:05:59.378970  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:05:59.379022  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:05:56.845550  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:59.344473  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:05:58.595556  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:00.595723  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.706748  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.709131  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:01.893635  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:01.907481  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:01.907606  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:01.949985  142150 cri.go:89] found id: ""
	I1212 01:06:01.950022  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.950035  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:01.950043  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:01.950112  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:01.986884  142150 cri.go:89] found id: ""
	I1212 01:06:01.986914  142150 logs.go:282] 0 containers: []
	W1212 01:06:01.986923  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:01.986928  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:01.986994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:02.025010  142150 cri.go:89] found id: ""
	I1212 01:06:02.025044  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.025056  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:02.025063  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:02.025137  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:02.061300  142150 cri.go:89] found id: ""
	I1212 01:06:02.061340  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.061352  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:02.061361  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:02.061427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:02.098627  142150 cri.go:89] found id: ""
	I1212 01:06:02.098667  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.098677  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:02.098684  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:02.098744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:02.137005  142150 cri.go:89] found id: ""
	I1212 01:06:02.137030  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.137038  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:02.137044  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:02.137104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:02.172052  142150 cri.go:89] found id: ""
	I1212 01:06:02.172086  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.172096  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:02.172102  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:02.172154  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:02.207721  142150 cri.go:89] found id: ""
	I1212 01:06:02.207750  142150 logs.go:282] 0 containers: []
	W1212 01:06:02.207761  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:02.207771  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:02.207787  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:02.221576  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:02.221605  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:02.291780  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:02.291812  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:02.291826  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:02.376553  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:02.376595  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:02.418407  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:02.418446  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:04.973347  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:04.988470  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:04.988545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:05.024045  142150 cri.go:89] found id: ""
	I1212 01:06:05.024076  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.024085  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:05.024092  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:05.024149  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:05.060055  142150 cri.go:89] found id: ""
	I1212 01:06:05.060079  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.060089  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:05.060095  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:05.060145  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:05.097115  142150 cri.go:89] found id: ""
	I1212 01:06:05.097142  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.097152  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:05.097160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:05.097220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:05.133941  142150 cri.go:89] found id: ""
	I1212 01:06:05.133976  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.133990  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:05.133998  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:05.134063  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:05.169157  142150 cri.go:89] found id: ""
	I1212 01:06:05.169185  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.169193  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:05.169200  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:05.169253  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:05.206434  142150 cri.go:89] found id: ""
	I1212 01:06:05.206464  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.206475  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:05.206484  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:05.206546  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:01.842981  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:03.843341  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.843811  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:02.597066  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:04.597793  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:07.095874  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:06.206955  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:08.208809  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:05.248363  142150 cri.go:89] found id: ""
	I1212 01:06:05.248397  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.248409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:05.248417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:05.248485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:05.284898  142150 cri.go:89] found id: ""
	I1212 01:06:05.284932  142150 logs.go:282] 0 containers: []
	W1212 01:06:05.284945  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:05.284958  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:05.284974  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:05.362418  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:05.362445  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:05.362464  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:05.446289  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:05.446349  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:05.487075  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:05.487107  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:05.542538  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:05.542582  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.057586  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:08.070959  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:08.071019  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:08.109906  142150 cri.go:89] found id: ""
	I1212 01:06:08.109936  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.109945  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:08.109951  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:08.110005  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:08.145130  142150 cri.go:89] found id: ""
	I1212 01:06:08.145159  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.145168  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:08.145175  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:08.145223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:08.183454  142150 cri.go:89] found id: ""
	I1212 01:06:08.183485  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.183496  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:08.183504  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:08.183573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:08.218728  142150 cri.go:89] found id: ""
	I1212 01:06:08.218752  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.218763  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:08.218772  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:08.218835  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:08.256230  142150 cri.go:89] found id: ""
	I1212 01:06:08.256263  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.256274  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:08.256283  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:08.256345  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:08.294179  142150 cri.go:89] found id: ""
	I1212 01:06:08.294209  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.294221  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:08.294229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:08.294293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:08.335793  142150 cri.go:89] found id: ""
	I1212 01:06:08.335822  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.335835  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:08.335843  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:08.335905  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:08.387704  142150 cri.go:89] found id: ""
	I1212 01:06:08.387734  142150 logs.go:282] 0 containers: []
	W1212 01:06:08.387746  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:08.387757  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:08.387773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:08.465260  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:08.465307  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:08.508088  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:08.508129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:08.558617  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:08.558655  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:08.573461  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:08.573489  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:08.649664  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:07.844408  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.343200  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:09.595982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:12.094513  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:10.708379  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:13.207302  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:11.150614  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:11.164991  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:11.165062  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:11.201977  142150 cri.go:89] found id: ""
	I1212 01:06:11.202011  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.202045  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:11.202055  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:11.202124  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:11.243638  142150 cri.go:89] found id: ""
	I1212 01:06:11.243667  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.243676  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:11.243682  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:11.243742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:11.279577  142150 cri.go:89] found id: ""
	I1212 01:06:11.279621  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.279634  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:11.279642  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:11.279709  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:11.317344  142150 cri.go:89] found id: ""
	I1212 01:06:11.317378  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.317386  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:11.317392  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:11.317457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:11.358331  142150 cri.go:89] found id: ""
	I1212 01:06:11.358361  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.358373  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:11.358381  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:11.358439  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:11.393884  142150 cri.go:89] found id: ""
	I1212 01:06:11.393911  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.393919  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:11.393926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:11.393974  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:11.433243  142150 cri.go:89] found id: ""
	I1212 01:06:11.433290  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.433302  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:11.433310  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:11.433374  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:11.478597  142150 cri.go:89] found id: ""
	I1212 01:06:11.478625  142150 logs.go:282] 0 containers: []
	W1212 01:06:11.478637  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:11.478650  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:11.478667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:11.528096  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:11.528133  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:11.542118  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:11.542149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:11.612414  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:11.612435  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:11.612451  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:11.689350  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:11.689389  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.230677  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:14.245866  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:14.245970  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:14.283451  142150 cri.go:89] found id: ""
	I1212 01:06:14.283487  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.283495  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:14.283502  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:14.283552  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:14.318812  142150 cri.go:89] found id: ""
	I1212 01:06:14.318840  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.318848  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:14.318855  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:14.318904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:14.356489  142150 cri.go:89] found id: ""
	I1212 01:06:14.356519  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.356527  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:14.356533  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:14.356590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:14.394224  142150 cri.go:89] found id: ""
	I1212 01:06:14.394260  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.394271  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:14.394279  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:14.394350  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:14.432440  142150 cri.go:89] found id: ""
	I1212 01:06:14.432467  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.432480  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:14.432488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:14.432540  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:14.469777  142150 cri.go:89] found id: ""
	I1212 01:06:14.469822  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.469835  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:14.469844  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:14.469904  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:14.504830  142150 cri.go:89] found id: ""
	I1212 01:06:14.504860  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.504872  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:14.504881  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:14.504941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:14.539399  142150 cri.go:89] found id: ""
	I1212 01:06:14.539423  142150 logs.go:282] 0 containers: []
	W1212 01:06:14.539432  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:14.539441  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:14.539454  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:14.552716  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:14.552749  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:14.628921  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:14.628945  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:14.628959  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:14.707219  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:14.707255  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:14.765953  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:14.765986  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:12.343941  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.843333  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:14.095296  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:16.596411  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:15.706990  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.707150  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:17.324233  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:17.337428  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:17.337499  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:17.374493  142150 cri.go:89] found id: ""
	I1212 01:06:17.374526  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.374538  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:17.374547  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:17.374616  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:17.408494  142150 cri.go:89] found id: ""
	I1212 01:06:17.408519  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.408527  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:17.408535  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:17.408582  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:17.452362  142150 cri.go:89] found id: ""
	I1212 01:06:17.452389  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.452397  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:17.452403  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:17.452456  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:17.493923  142150 cri.go:89] found id: ""
	I1212 01:06:17.493957  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.493968  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:17.493976  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:17.494037  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:17.529519  142150 cri.go:89] found id: ""
	I1212 01:06:17.529548  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.529556  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:17.529562  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:17.529610  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:17.570272  142150 cri.go:89] found id: ""
	I1212 01:06:17.570297  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.570305  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:17.570312  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:17.570361  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:17.609326  142150 cri.go:89] found id: ""
	I1212 01:06:17.609360  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.609371  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:17.609379  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:17.609470  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:17.642814  142150 cri.go:89] found id: ""
	I1212 01:06:17.642844  142150 logs.go:282] 0 containers: []
	W1212 01:06:17.642853  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:17.642863  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:17.642875  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:17.656476  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:17.656510  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:17.726997  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:17.727024  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:17.727039  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:17.803377  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:17.803424  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:17.851190  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:17.851222  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:17.344804  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.347642  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.096235  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.594712  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:19.707303  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:21.707482  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:24.208937  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:20.406953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:20.420410  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:20.420484  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:20.462696  142150 cri.go:89] found id: ""
	I1212 01:06:20.462733  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.462744  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:20.462752  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:20.462815  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:20.522881  142150 cri.go:89] found id: ""
	I1212 01:06:20.522906  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.522915  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:20.522921  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:20.522979  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:20.575876  142150 cri.go:89] found id: ""
	I1212 01:06:20.575917  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.575928  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:20.575936  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:20.576003  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:20.627875  142150 cri.go:89] found id: ""
	I1212 01:06:20.627907  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.627919  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:20.627926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:20.627976  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:20.668323  142150 cri.go:89] found id: ""
	I1212 01:06:20.668353  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.668365  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:20.668372  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:20.668441  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:20.705907  142150 cri.go:89] found id: ""
	I1212 01:06:20.705942  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.705954  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:20.705963  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:20.706023  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:20.740221  142150 cri.go:89] found id: ""
	I1212 01:06:20.740249  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.740257  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:20.740263  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:20.740328  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:20.780346  142150 cri.go:89] found id: ""
	I1212 01:06:20.780372  142150 logs.go:282] 0 containers: []
	W1212 01:06:20.780380  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:20.780390  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:20.780407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:20.837660  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:20.837699  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:20.852743  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:20.852775  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:20.928353  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:20.928385  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:20.928401  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:21.009919  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:21.009961  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:23.553897  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:23.568667  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:23.568742  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:23.607841  142150 cri.go:89] found id: ""
	I1212 01:06:23.607873  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.607884  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:23.607891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:23.607945  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:23.645461  142150 cri.go:89] found id: ""
	I1212 01:06:23.645494  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.645505  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:23.645513  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:23.645578  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:23.681140  142150 cri.go:89] found id: ""
	I1212 01:06:23.681165  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.681174  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:23.681180  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:23.681230  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:23.718480  142150 cri.go:89] found id: ""
	I1212 01:06:23.718515  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.718526  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:23.718534  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:23.718602  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:23.760206  142150 cri.go:89] found id: ""
	I1212 01:06:23.760235  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.760243  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:23.760249  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:23.760302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:23.797384  142150 cri.go:89] found id: ""
	I1212 01:06:23.797417  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.797431  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:23.797439  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:23.797496  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:23.830608  142150 cri.go:89] found id: ""
	I1212 01:06:23.830639  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.830650  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:23.830658  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:23.830722  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:23.867481  142150 cri.go:89] found id: ""
	I1212 01:06:23.867509  142150 logs.go:282] 0 containers: []
	W1212 01:06:23.867522  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:23.867534  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:23.867551  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:23.922529  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:23.922579  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:23.936763  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:23.936794  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:24.004371  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:24.004398  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:24.004413  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:24.083097  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:24.083136  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
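
(Editor's note, not part of the log: each cycle above lists CRI containers per control-plane component with "sudo crictl ps -a --quiet --name=<component>" and logs "No container was found matching" when the ID list comes back empty. The following Go sketch, an illustrative reconstruction assuming crictl is on PATH and runnable via sudo, shows that per-component check:)

// list_control_plane.go: sketch of the per-component listing seen in the cri.go lines.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same invocation as in the log; --quiet prints only container IDs, one per line.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil || strings.TrimSpace(string(out)) == "" {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %s\n", name, strings.TrimSpace(string(out)))
	}
}
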
	I1212 01:06:21.842975  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.845498  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.343574  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:23.596224  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.094625  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.707610  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:29.208425  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:26.633394  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:26.646898  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:26.646977  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:26.680382  142150 cri.go:89] found id: ""
	I1212 01:06:26.680411  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.680421  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:26.680427  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:26.680475  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:26.716948  142150 cri.go:89] found id: ""
	I1212 01:06:26.716982  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.716994  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:26.717001  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:26.717090  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:26.753141  142150 cri.go:89] found id: ""
	I1212 01:06:26.753168  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.753176  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:26.753182  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:26.753231  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:26.791025  142150 cri.go:89] found id: ""
	I1212 01:06:26.791056  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.791068  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:26.791074  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:26.791130  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:26.829914  142150 cri.go:89] found id: ""
	I1212 01:06:26.829952  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.829965  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:26.829973  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:26.830046  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:26.865990  142150 cri.go:89] found id: ""
	I1212 01:06:26.866022  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.866045  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:26.866053  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:26.866133  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:26.906007  142150 cri.go:89] found id: ""
	I1212 01:06:26.906040  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.906052  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:26.906060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:26.906141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:26.946004  142150 cri.go:89] found id: ""
	I1212 01:06:26.946038  142150 logs.go:282] 0 containers: []
	W1212 01:06:26.946048  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:26.946057  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:26.946073  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:27.018967  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:27.018996  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:27.019013  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:27.100294  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:27.100334  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:27.141147  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:27.141190  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:27.193161  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:27.193200  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:29.709616  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:29.723336  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:29.723413  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:29.769938  142150 cri.go:89] found id: ""
	I1212 01:06:29.769966  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.769977  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:29.769985  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:29.770048  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:29.809109  142150 cri.go:89] found id: ""
	I1212 01:06:29.809147  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.809160  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:29.809168  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:29.809229  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:29.845444  142150 cri.go:89] found id: ""
	I1212 01:06:29.845471  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.845481  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:29.845488  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:29.845548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:29.882109  142150 cri.go:89] found id: ""
	I1212 01:06:29.882138  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.882147  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:29.882153  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:29.882203  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:29.928731  142150 cri.go:89] found id: ""
	I1212 01:06:29.928764  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.928777  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:29.928785  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:29.928849  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:29.972994  142150 cri.go:89] found id: ""
	I1212 01:06:29.973026  142150 logs.go:282] 0 containers: []
	W1212 01:06:29.973041  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:29.973048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:29.973098  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:30.009316  142150 cri.go:89] found id: ""
	I1212 01:06:30.009349  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.009357  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:30.009363  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:30.009422  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:30.043082  142150 cri.go:89] found id: ""
	I1212 01:06:30.043111  142150 logs.go:282] 0 containers: []
	W1212 01:06:30.043122  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:30.043134  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:30.043149  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:30.097831  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:30.097866  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:30.112873  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:30.112906  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:30.187035  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:30.187061  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:30.187081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:28.843986  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.343502  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:28.096043  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.594875  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:31.707976  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:34.208061  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:30.273106  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:30.273155  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:32.819179  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:32.833486  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:32.833555  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:32.872579  142150 cri.go:89] found id: ""
	I1212 01:06:32.872622  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.872631  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:32.872645  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:32.872700  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:32.909925  142150 cri.go:89] found id: ""
	I1212 01:06:32.909958  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.909970  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:32.909979  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:32.910053  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:32.949085  142150 cri.go:89] found id: ""
	I1212 01:06:32.949116  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.949127  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:32.949135  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:32.949197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:32.985755  142150 cri.go:89] found id: ""
	I1212 01:06:32.985782  142150 logs.go:282] 0 containers: []
	W1212 01:06:32.985790  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:32.985796  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:32.985845  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:33.028340  142150 cri.go:89] found id: ""
	I1212 01:06:33.028367  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.028374  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:33.028380  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:33.028432  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:33.064254  142150 cri.go:89] found id: ""
	I1212 01:06:33.064283  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.064292  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:33.064298  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:33.064349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:33.099905  142150 cri.go:89] found id: ""
	I1212 01:06:33.099936  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.099943  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:33.099949  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:33.100008  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:33.137958  142150 cri.go:89] found id: ""
	I1212 01:06:33.137993  142150 logs.go:282] 0 containers: []
	W1212 01:06:33.138004  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:33.138016  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:33.138034  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:33.190737  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:33.190776  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:33.205466  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:33.205502  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:33.278815  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:33.278844  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:33.278863  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:33.357387  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:33.357429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:33.843106  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.344148  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:33.095175  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.095369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:37.095797  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:36.707296  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.207875  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:35.898317  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:35.913832  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:35.913907  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:35.950320  142150 cri.go:89] found id: ""
	I1212 01:06:35.950345  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.950353  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:35.950359  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:35.950407  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:35.989367  142150 cri.go:89] found id: ""
	I1212 01:06:35.989394  142150 logs.go:282] 0 containers: []
	W1212 01:06:35.989403  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:35.989409  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:35.989457  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:36.024118  142150 cri.go:89] found id: ""
	I1212 01:06:36.024148  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.024155  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:36.024163  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:36.024221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:36.059937  142150 cri.go:89] found id: ""
	I1212 01:06:36.059966  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.059974  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:36.059980  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:36.060030  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:36.096897  142150 cri.go:89] found id: ""
	I1212 01:06:36.096921  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.096933  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:36.096941  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:36.096994  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:36.134387  142150 cri.go:89] found id: ""
	I1212 01:06:36.134412  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.134420  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:36.134426  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:36.134490  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:36.177414  142150 cri.go:89] found id: ""
	I1212 01:06:36.177452  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.177464  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:36.177471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:36.177533  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:36.221519  142150 cri.go:89] found id: ""
	I1212 01:06:36.221553  142150 logs.go:282] 0 containers: []
	W1212 01:06:36.221563  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:36.221575  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:36.221590  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:36.234862  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:36.234891  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:36.314361  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:36.314391  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:36.314407  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:36.398283  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:36.398328  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:36.441441  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:36.441481  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:38.995369  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:39.009149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:39.009221  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:39.044164  142150 cri.go:89] found id: ""
	I1212 01:06:39.044194  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.044204  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:39.044210  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:39.044259  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:39.080145  142150 cri.go:89] found id: ""
	I1212 01:06:39.080180  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.080191  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:39.080197  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:39.080254  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:39.119128  142150 cri.go:89] found id: ""
	I1212 01:06:39.119156  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.119167  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:39.119174  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:39.119240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:39.157444  142150 cri.go:89] found id: ""
	I1212 01:06:39.157476  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.157487  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:39.157495  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:39.157562  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:39.191461  142150 cri.go:89] found id: ""
	I1212 01:06:39.191486  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.191497  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:39.191505  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:39.191573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:39.227742  142150 cri.go:89] found id: ""
	I1212 01:06:39.227769  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.227777  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:39.227783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:39.227832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:39.268207  142150 cri.go:89] found id: ""
	I1212 01:06:39.268239  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.268251  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:39.268259  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:39.268319  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:39.304054  142150 cri.go:89] found id: ""
	I1212 01:06:39.304092  142150 logs.go:282] 0 containers: []
	W1212 01:06:39.304103  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:39.304115  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:39.304128  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:39.381937  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:39.381979  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:39.421824  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:39.421864  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:39.475968  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:39.476020  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:39.491398  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:39.491429  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:39.568463  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:38.844240  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.343589  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:39.594883  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.594919  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:41.707035  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.707860  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:42.068594  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:42.082041  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:42.082123  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:42.121535  142150 cri.go:89] found id: ""
	I1212 01:06:42.121562  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.121570  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:42.121577  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:42.121627  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:42.156309  142150 cri.go:89] found id: ""
	I1212 01:06:42.156341  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.156350  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:42.156364  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:42.156427  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:42.190111  142150 cri.go:89] found id: ""
	I1212 01:06:42.190137  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.190145  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:42.190151  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:42.190209  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:42.225424  142150 cri.go:89] found id: ""
	I1212 01:06:42.225452  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.225461  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:42.225468  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:42.225526  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:42.260519  142150 cri.go:89] found id: ""
	I1212 01:06:42.260552  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.260564  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:42.260576  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:42.260644  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:42.296987  142150 cri.go:89] found id: ""
	I1212 01:06:42.297017  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.297028  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:42.297036  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:42.297109  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:42.331368  142150 cri.go:89] found id: ""
	I1212 01:06:42.331400  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.331409  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:42.331415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:42.331482  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:42.367010  142150 cri.go:89] found id: ""
	I1212 01:06:42.367051  142150 logs.go:282] 0 containers: []
	W1212 01:06:42.367062  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:42.367075  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:42.367093  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:42.381264  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:42.381299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:42.452831  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:42.452856  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:42.452877  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:42.531965  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:42.532006  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:42.571718  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:42.571757  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.128570  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:45.142897  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:45.142969  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:45.186371  142150 cri.go:89] found id: ""
	I1212 01:06:45.186404  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.186412  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:45.186418  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:45.186468  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:45.224085  142150 cri.go:89] found id: ""
	I1212 01:06:45.224115  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.224123  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:45.224129  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:45.224195  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:43.346470  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.845269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:43.595640  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.596624  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.708204  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:48.206947  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:45.258477  142150 cri.go:89] found id: ""
	I1212 01:06:45.258510  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.258522  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:45.258530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:45.258590  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:45.293091  142150 cri.go:89] found id: ""
	I1212 01:06:45.293125  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.293137  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:45.293145  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:45.293211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:45.331275  142150 cri.go:89] found id: ""
	I1212 01:06:45.331314  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.331325  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:45.331332  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:45.331400  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:45.374915  142150 cri.go:89] found id: ""
	I1212 01:06:45.374943  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.374956  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:45.374965  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:45.375027  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:45.415450  142150 cri.go:89] found id: ""
	I1212 01:06:45.415480  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.415489  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:45.415496  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:45.415548  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:45.454407  142150 cri.go:89] found id: ""
	I1212 01:06:45.454431  142150 logs.go:282] 0 containers: []
	W1212 01:06:45.454439  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:45.454449  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:45.454460  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:45.508573  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:45.508612  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:45.524049  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:45.524085  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:45.593577  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:45.593602  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:45.593618  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:45.678581  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:45.678620  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.221523  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:48.235146  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:48.235212  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:48.271845  142150 cri.go:89] found id: ""
	I1212 01:06:48.271875  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.271885  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:48.271891  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:48.271944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:48.308558  142150 cri.go:89] found id: ""
	I1212 01:06:48.308589  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.308602  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:48.308610  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:48.308673  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:48.346395  142150 cri.go:89] found id: ""
	I1212 01:06:48.346423  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.346434  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:48.346440  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:48.346501  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:48.381505  142150 cri.go:89] found id: ""
	I1212 01:06:48.381536  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.381548  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:48.381555  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:48.381617  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:48.417829  142150 cri.go:89] found id: ""
	I1212 01:06:48.417859  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.417871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:48.417878  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:48.417944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:48.453476  142150 cri.go:89] found id: ""
	I1212 01:06:48.453508  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.453519  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:48.453528  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:48.453592  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:48.490500  142150 cri.go:89] found id: ""
	I1212 01:06:48.490531  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.490541  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:48.490547  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:48.490597  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:48.527492  142150 cri.go:89] found id: ""
	I1212 01:06:48.527520  142150 logs.go:282] 0 containers: []
	W1212 01:06:48.527529  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:48.527539  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:48.527550  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:48.570458  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:48.570499  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:48.623986  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:48.624031  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:48.638363  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:48.638392  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:48.709373  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:48.709400  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:48.709416  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:48.344831  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.345010  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:47.596708  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.094517  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:52.094931  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:50.706903  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:53.207824  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:51.291629  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:51.305060  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:51.305140  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:51.340368  142150 cri.go:89] found id: ""
	I1212 01:06:51.340394  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.340404  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:51.340411  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:51.340489  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:51.381421  142150 cri.go:89] found id: ""
	I1212 01:06:51.381453  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.381466  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:51.381474  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:51.381536  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:51.421482  142150 cri.go:89] found id: ""
	I1212 01:06:51.421518  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.421530  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:51.421538  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:51.421605  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:51.457190  142150 cri.go:89] found id: ""
	I1212 01:06:51.457217  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.457227  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:51.457236  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:51.457302  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:51.496149  142150 cri.go:89] found id: ""
	I1212 01:06:51.496184  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.496196  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:51.496205  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:51.496270  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:51.529779  142150 cri.go:89] found id: ""
	I1212 01:06:51.529809  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.529820  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:51.529826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:51.529893  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:51.568066  142150 cri.go:89] found id: ""
	I1212 01:06:51.568105  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.568118  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:51.568126  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:51.568197  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:51.605556  142150 cri.go:89] found id: ""
	I1212 01:06:51.605593  142150 logs.go:282] 0 containers: []
	W1212 01:06:51.605605  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:51.605616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:51.605632  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:51.680531  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:51.680570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:51.727663  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:51.727697  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:51.780013  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:51.780053  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:51.794203  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:51.794232  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:51.869407  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.369854  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:54.383539  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:54.383625  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:54.418536  142150 cri.go:89] found id: ""
	I1212 01:06:54.418574  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.418586  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:54.418594  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:54.418657  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:54.454485  142150 cri.go:89] found id: ""
	I1212 01:06:54.454515  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.454523  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:54.454531  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:54.454581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:54.494254  142150 cri.go:89] found id: ""
	I1212 01:06:54.494284  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.494296  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:54.494304  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:54.494366  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:54.532727  142150 cri.go:89] found id: ""
	I1212 01:06:54.532757  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.532768  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:54.532776  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:54.532862  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:54.569817  142150 cri.go:89] found id: ""
	I1212 01:06:54.569845  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.569856  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:54.569864  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:54.569927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:54.628530  142150 cri.go:89] found id: ""
	I1212 01:06:54.628564  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.628577  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:54.628585  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:54.628635  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:54.666761  142150 cri.go:89] found id: ""
	I1212 01:06:54.666792  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.666801  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:54.666808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:54.666879  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:54.703699  142150 cri.go:89] found id: ""
	I1212 01:06:54.703726  142150 logs.go:282] 0 containers: []
	W1212 01:06:54.703737  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:54.703749  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:54.703764  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:54.754635  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:54.754672  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:54.769112  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:54.769143  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:54.845563  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:54.845580  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:54.845591  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:54.922651  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:54.922690  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
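The pass above is one full round of minikube's diagnostic loop while the apiserver is unreachable: each expected control-plane component is listed via crictl, none is found, the kubelet/dmesg/CRI-O/container-status logs are collected, and the describe-nodes step fails because nothing answers on localhost:8443. A rough manual equivalent of the per-component check, assuming the same crictl binary the log invokes is available on the node, is:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      # mirrors the "listing CRI containers" / "0 containers" lines above
      sudo crictl ps -a --quiet --name="$c" | grep -q . || echo "no container matching $c"
    done

The same pass repeats below every few seconds while the pod-ready wait continues.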
	I1212 01:06:52.843114  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.845370  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:54.095381  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:56.097745  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:55.207916  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.708907  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:57.467454  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:06:57.480673  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:06:57.480769  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:06:57.517711  142150 cri.go:89] found id: ""
	I1212 01:06:57.517737  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.517745  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:06:57.517751  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:06:57.517813  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:06:57.552922  142150 cri.go:89] found id: ""
	I1212 01:06:57.552948  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.552956  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:06:57.552977  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:06:57.553061  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:06:57.589801  142150 cri.go:89] found id: ""
	I1212 01:06:57.589827  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.589839  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:06:57.589845  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:06:57.589909  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:06:57.626088  142150 cri.go:89] found id: ""
	I1212 01:06:57.626123  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.626135  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:06:57.626142  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:06:57.626211  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:06:57.661228  142150 cri.go:89] found id: ""
	I1212 01:06:57.661261  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.661273  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:06:57.661281  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:06:57.661344  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:06:57.699523  142150 cri.go:89] found id: ""
	I1212 01:06:57.699551  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.699559  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:06:57.699565  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:06:57.699641  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:06:57.739000  142150 cri.go:89] found id: ""
	I1212 01:06:57.739032  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.739043  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:06:57.739051  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:06:57.739128  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:06:57.776691  142150 cri.go:89] found id: ""
	I1212 01:06:57.776723  142150 logs.go:282] 0 containers: []
	W1212 01:06:57.776732  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:06:57.776743  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:06:57.776767  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:06:57.828495  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:06:57.828535  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:06:57.843935  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:06:57.843970  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:06:57.916420  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:06:57.916446  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:06:57.916463  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:06:57.994107  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:06:57.994158  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:06:57.344917  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:59.844269  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:06:58.595415  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:01.095794  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.208708  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:02.707173  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:00.540646  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:00.554032  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:00.554141  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:00.590815  142150 cri.go:89] found id: ""
	I1212 01:07:00.590843  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.590852  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:00.590858  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:00.590919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:00.627460  142150 cri.go:89] found id: ""
	I1212 01:07:00.627494  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.627507  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:00.627515  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:00.627586  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:00.667429  142150 cri.go:89] found id: ""
	I1212 01:07:00.667472  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.667484  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:00.667494  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:00.667558  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:00.713026  142150 cri.go:89] found id: ""
	I1212 01:07:00.713053  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.713060  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:00.713067  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:00.713129  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:00.748218  142150 cri.go:89] found id: ""
	I1212 01:07:00.748251  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.748264  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:00.748272  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:00.748325  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:00.786287  142150 cri.go:89] found id: ""
	I1212 01:07:00.786314  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.786322  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:00.786331  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:00.786389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:00.822957  142150 cri.go:89] found id: ""
	I1212 01:07:00.822986  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.822999  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:00.823007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:00.823081  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:00.862310  142150 cri.go:89] found id: ""
	I1212 01:07:00.862342  142150 logs.go:282] 0 containers: []
	W1212 01:07:00.862354  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:00.862368  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:00.862385  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:00.930308  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:00.930343  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:00.930360  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:01.013889  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:01.013934  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:01.064305  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:01.064342  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:01.133631  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:01.133678  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:03.648853  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:03.663287  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:03.663349  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:03.700723  142150 cri.go:89] found id: ""
	I1212 01:07:03.700754  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.700766  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:03.700774  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:03.700840  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:03.741025  142150 cri.go:89] found id: ""
	I1212 01:07:03.741054  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.741065  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:03.741073  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:03.741147  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:03.782877  142150 cri.go:89] found id: ""
	I1212 01:07:03.782914  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.782927  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:03.782935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:03.782998  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:03.819227  142150 cri.go:89] found id: ""
	I1212 01:07:03.819272  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.819285  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:03.819292  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:03.819341  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:03.856660  142150 cri.go:89] found id: ""
	I1212 01:07:03.856687  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.856695  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:03.856701  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:03.856750  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:03.893368  142150 cri.go:89] found id: ""
	I1212 01:07:03.893400  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.893410  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:03.893417  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:03.893469  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:03.929239  142150 cri.go:89] found id: ""
	I1212 01:07:03.929267  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.929275  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:03.929282  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:03.929335  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:03.963040  142150 cri.go:89] found id: ""
	I1212 01:07:03.963077  142150 logs.go:282] 0 containers: []
	W1212 01:07:03.963089  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:03.963113  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:03.963129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:04.040119  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:04.040147  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:04.040161  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:04.122230  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:04.122269  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:04.163266  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:04.163298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:04.218235  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:04.218271  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:02.342899  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:04.343072  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.344552  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:03.596239  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.094842  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:05.206813  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:07.209422  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:06.732405  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:06.748171  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:06.748278  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:06.792828  142150 cri.go:89] found id: ""
	I1212 01:07:06.792853  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.792861  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:06.792868  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:06.792929  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:06.851440  142150 cri.go:89] found id: ""
	I1212 01:07:06.851472  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.851483  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:06.851490  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:06.851556  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:06.894850  142150 cri.go:89] found id: ""
	I1212 01:07:06.894879  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.894887  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:06.894893  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:06.894944  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:06.931153  142150 cri.go:89] found id: ""
	I1212 01:07:06.931188  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.931199  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:06.931206  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:06.931271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:06.966835  142150 cri.go:89] found id: ""
	I1212 01:07:06.966862  142150 logs.go:282] 0 containers: []
	W1212 01:07:06.966871  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:06.966877  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:06.966939  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:07.004810  142150 cri.go:89] found id: ""
	I1212 01:07:07.004839  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.004848  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:07.004854  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:07.004912  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:07.042641  142150 cri.go:89] found id: ""
	I1212 01:07:07.042679  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.042691  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:07.042699  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:07.042764  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:07.076632  142150 cri.go:89] found id: ""
	I1212 01:07:07.076659  142150 logs.go:282] 0 containers: []
	W1212 01:07:07.076668  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:07.076678  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:07.076692  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:07.136796  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:07.136841  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:07.153797  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:07.153831  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:07.231995  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:07.232025  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:07.232042  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:07.319913  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:07.319950  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:09.862898  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:09.878554  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:09.878640  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:09.914747  142150 cri.go:89] found id: ""
	I1212 01:07:09.914782  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.914795  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:09.914803  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:09.914864  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:09.949960  142150 cri.go:89] found id: ""
	I1212 01:07:09.949998  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.950019  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:09.950027  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:09.950084  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:09.989328  142150 cri.go:89] found id: ""
	I1212 01:07:09.989368  142150 logs.go:282] 0 containers: []
	W1212 01:07:09.989380  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:09.989388  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:09.989454  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:10.024352  142150 cri.go:89] found id: ""
	I1212 01:07:10.024382  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.024390  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:10.024397  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:10.024446  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:10.058429  142150 cri.go:89] found id: ""
	I1212 01:07:10.058459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.058467  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:10.058473  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:10.058524  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:10.095183  142150 cri.go:89] found id: ""
	I1212 01:07:10.095219  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.095227  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:10.095232  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:10.095284  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:10.129657  142150 cri.go:89] found id: ""
	I1212 01:07:10.129684  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.129695  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:10.129703  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:10.129759  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:10.164433  142150 cri.go:89] found id: ""
	I1212 01:07:10.164459  142150 logs.go:282] 0 containers: []
	W1212 01:07:10.164470  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:10.164483  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:10.164500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:10.178655  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:10.178687  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1212 01:07:08.842564  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.843885  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:08.095189  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:10.096580  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:09.707537  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.205862  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.207175  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	W1212 01:07:10.252370  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:10.252403  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:10.252421  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:10.329870  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:10.329914  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:10.377778  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:10.377812  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:12.929471  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:12.944591  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:12.944651  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:12.980053  142150 cri.go:89] found id: ""
	I1212 01:07:12.980079  142150 logs.go:282] 0 containers: []
	W1212 01:07:12.980088  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:12.980097  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:12.980182  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:13.021710  142150 cri.go:89] found id: ""
	I1212 01:07:13.021743  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.021752  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:13.021758  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:13.021828  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:13.060426  142150 cri.go:89] found id: ""
	I1212 01:07:13.060458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.060469  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:13.060477  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:13.060545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:13.097435  142150 cri.go:89] found id: ""
	I1212 01:07:13.097458  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.097466  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:13.097471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:13.097521  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:13.134279  142150 cri.go:89] found id: ""
	I1212 01:07:13.134314  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.134327  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:13.134335  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:13.134402  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:13.169942  142150 cri.go:89] found id: ""
	I1212 01:07:13.169971  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.169984  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:13.169992  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:13.170054  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:13.207495  142150 cri.go:89] found id: ""
	I1212 01:07:13.207526  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.207537  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:13.207550  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:13.207636  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:13.245214  142150 cri.go:89] found id: ""
	I1212 01:07:13.245240  142150 logs.go:282] 0 containers: []
	W1212 01:07:13.245248  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:13.245258  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:13.245272  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:13.301041  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:13.301081  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:13.316068  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:13.316104  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:13.391091  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:13.391120  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:13.391138  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:13.472090  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:13.472130  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:12.844629  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:15.344452  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:12.594761  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:14.595360  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:17.095340  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.707535  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.208767  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:16.013216  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:16.026636  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:16.026715  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:16.062126  142150 cri.go:89] found id: ""
	I1212 01:07:16.062157  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.062169  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:16.062177  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:16.062240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:16.097538  142150 cri.go:89] found id: ""
	I1212 01:07:16.097562  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.097572  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:16.097581  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:16.097637  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:16.133615  142150 cri.go:89] found id: ""
	I1212 01:07:16.133649  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.133661  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:16.133670  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:16.133732  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:16.169327  142150 cri.go:89] found id: ""
	I1212 01:07:16.169392  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.169414  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:16.169431  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:16.169538  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:16.214246  142150 cri.go:89] found id: ""
	I1212 01:07:16.214270  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.214278  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:16.214284  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:16.214342  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:16.251578  142150 cri.go:89] found id: ""
	I1212 01:07:16.251629  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.251641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:16.251649  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:16.251712  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:16.298772  142150 cri.go:89] found id: ""
	I1212 01:07:16.298802  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.298811  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:16.298818  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:16.298891  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:16.336901  142150 cri.go:89] found id: ""
	I1212 01:07:16.336937  142150 logs.go:282] 0 containers: []
	W1212 01:07:16.336946  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:16.336957  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:16.336969  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:16.389335  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:16.389376  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:16.403713  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:16.403743  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:16.485945  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:16.485972  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:16.485992  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:16.572137  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:16.572185  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.120296  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:19.133826  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:19.133902  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:19.174343  142150 cri.go:89] found id: ""
	I1212 01:07:19.174381  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.174391  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:19.174397  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:19.174449  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:19.212403  142150 cri.go:89] found id: ""
	I1212 01:07:19.212425  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.212433  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:19.212439  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:19.212488  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:19.247990  142150 cri.go:89] found id: ""
	I1212 01:07:19.248018  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.248027  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:19.248033  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:19.248088  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:19.286733  142150 cri.go:89] found id: ""
	I1212 01:07:19.286763  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.286775  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:19.286783  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:19.286848  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:19.325967  142150 cri.go:89] found id: ""
	I1212 01:07:19.325995  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.326006  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:19.326013  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:19.326073  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:19.361824  142150 cri.go:89] found id: ""
	I1212 01:07:19.361862  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.361874  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:19.361882  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:19.361951  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:19.399874  142150 cri.go:89] found id: ""
	I1212 01:07:19.399903  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.399915  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:19.399924  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:19.399978  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:19.444342  142150 cri.go:89] found id: ""
	I1212 01:07:19.444368  142150 logs.go:282] 0 containers: []
	W1212 01:07:19.444376  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:19.444386  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:19.444398  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:19.524722  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:19.524766  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:19.564941  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:19.564984  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:19.620881  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:19.620915  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:19.635038  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:19.635078  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:19.707819  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:17.851516  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:20.343210  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:19.596696  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.095982  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:21.706245  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:23.707282  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:22.208686  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:22.222716  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:22.222774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:22.258211  142150 cri.go:89] found id: ""
	I1212 01:07:22.258237  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.258245  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:22.258251  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:22.258299  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:22.294663  142150 cri.go:89] found id: ""
	I1212 01:07:22.294692  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.294701  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:22.294707  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:22.294771  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:22.331817  142150 cri.go:89] found id: ""
	I1212 01:07:22.331849  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.331861  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:22.331869  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:22.331927  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:22.373138  142150 cri.go:89] found id: ""
	I1212 01:07:22.373168  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.373176  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:22.373185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:22.373238  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:22.409864  142150 cri.go:89] found id: ""
	I1212 01:07:22.409903  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.409916  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:22.409927  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:22.409983  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:22.447498  142150 cri.go:89] found id: ""
	I1212 01:07:22.447531  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.447542  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:22.447549  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:22.447626  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:22.488674  142150 cri.go:89] found id: ""
	I1212 01:07:22.488715  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.488727  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:22.488735  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:22.488803  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:22.529769  142150 cri.go:89] found id: ""
	I1212 01:07:22.529797  142150 logs.go:282] 0 containers: []
	W1212 01:07:22.529806  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:22.529817  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:22.529837  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:22.611864  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:22.611889  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:22.611904  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:22.694660  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:22.694707  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:22.736800  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:22.736838  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:22.789670  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:22.789710  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:22.344482  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.844735  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:24.594999  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:26.595500  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:25.707950  141469 pod_ready.go:103] pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.200781  141469 pod_ready.go:82] duration metric: took 4m0.000776844s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:28.200837  141469 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-5bms9" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:28.200866  141469 pod_ready.go:39] duration metric: took 4m15.556500045s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:28.200916  141469 kubeadm.go:597] duration metric: took 4m22.571399912s to restartPrimaryControlPlane
	W1212 01:07:28.201043  141469 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:28.201086  141469 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
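When the 4m0s wait for metrics-server expires, the run stops trying to restart the existing control plane and falls back to a full reset, as logged above. The same command, reformatted for readability (binary path and CRI socket copied from the log; destructive, only meaningful on a throwaway minikube node):

    # Destructive: wipes the node's existing control-plane state, exactly as recorded above.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force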
	I1212 01:07:25.305223  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:25.318986  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:25.319057  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:25.356111  142150 cri.go:89] found id: ""
	I1212 01:07:25.356140  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.356150  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:25.356157  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:25.356223  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:25.396120  142150 cri.go:89] found id: ""
	I1212 01:07:25.396151  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.396163  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:25.396171  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:25.396236  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:25.436647  142150 cri.go:89] found id: ""
	I1212 01:07:25.436674  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.436681  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:25.436687  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:25.436744  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:25.475682  142150 cri.go:89] found id: ""
	I1212 01:07:25.475709  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.475721  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:25.475729  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:25.475791  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:25.512536  142150 cri.go:89] found id: ""
	I1212 01:07:25.512564  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.512576  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:25.512584  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:25.512655  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:25.549569  142150 cri.go:89] found id: ""
	I1212 01:07:25.549600  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.549609  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:25.549616  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:25.549681  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:25.585042  142150 cri.go:89] found id: ""
	I1212 01:07:25.585074  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.585089  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:25.585106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:25.585181  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:25.626257  142150 cri.go:89] found id: ""
	I1212 01:07:25.626283  142150 logs.go:282] 0 containers: []
	W1212 01:07:25.626291  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:25.626301  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:25.626314  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:25.679732  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:25.679773  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:25.693682  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:25.693711  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:25.770576  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:25.770599  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:25.770613  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:25.848631  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:25.848667  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
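Each diagnostic pass above runs one crictl listing per expected control-plane component and finds nothing. A compact sketch of the same sweep (component names, flags, and the crictl/docker fallback are copied from the log):

    # One "crictl ps" per component, mirroring the checks logged above; all come back empty here.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${name} =="
      sudo crictl ps -a --quiet --name="${name}"
    done
    # Overall container status, with the same docker fallback the run uses.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a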
	I1212 01:07:28.388387  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:28.404838  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:28.404925  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:28.447452  142150 cri.go:89] found id: ""
	I1212 01:07:28.447486  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.447498  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:28.447506  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:28.447581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:28.487285  142150 cri.go:89] found id: ""
	I1212 01:07:28.487312  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.487321  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:28.487326  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:28.487389  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:28.520403  142150 cri.go:89] found id: ""
	I1212 01:07:28.520433  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.520442  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:28.520448  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:28.520514  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:28.556671  142150 cri.go:89] found id: ""
	I1212 01:07:28.556703  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.556712  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:28.556720  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:28.556787  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:28.597136  142150 cri.go:89] found id: ""
	I1212 01:07:28.597165  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.597176  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:28.597185  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:28.597258  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:28.632603  142150 cri.go:89] found id: ""
	I1212 01:07:28.632633  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.632641  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:28.632648  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:28.632710  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:28.672475  142150 cri.go:89] found id: ""
	I1212 01:07:28.672512  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.672523  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:28.672530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:28.672581  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:28.715053  142150 cri.go:89] found id: ""
	I1212 01:07:28.715093  142150 logs.go:282] 0 containers: []
	W1212 01:07:28.715104  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:28.715114  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:28.715129  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:28.752978  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:28.753017  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:28.807437  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:28.807479  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:28.822196  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:28.822223  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:28.902592  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:28.902616  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:28.902630  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
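The "Gathering logs" steps in each pass boil down to four host-side commands; collected here as a sketch (unit names, flags, and line counts taken verbatim from the log):

    # Same sources the run collects in every pass above.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig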
	I1212 01:07:27.343233  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:29.344194  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:28.596410  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.096062  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:31.486972  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:31.500676  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:31.500755  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:31.536877  142150 cri.go:89] found id: ""
	I1212 01:07:31.536911  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.536922  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:31.536931  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:31.537000  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:31.572637  142150 cri.go:89] found id: ""
	I1212 01:07:31.572670  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.572684  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:31.572692  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:31.572761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:31.610050  142150 cri.go:89] found id: ""
	I1212 01:07:31.610084  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.610097  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:31.610106  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:31.610159  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:31.645872  142150 cri.go:89] found id: ""
	I1212 01:07:31.645905  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.645918  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:31.645926  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:31.645988  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:31.682374  142150 cri.go:89] found id: ""
	I1212 01:07:31.682401  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.682409  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:31.682415  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:31.682464  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:31.724755  142150 cri.go:89] found id: ""
	I1212 01:07:31.724788  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.724801  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:31.724809  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:31.724877  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:31.760700  142150 cri.go:89] found id: ""
	I1212 01:07:31.760732  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.760741  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:31.760747  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:31.760823  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:31.794503  142150 cri.go:89] found id: ""
	I1212 01:07:31.794538  142150 logs.go:282] 0 containers: []
	W1212 01:07:31.794549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:31.794562  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:31.794577  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:31.837103  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:31.837139  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:31.889104  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:31.889142  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:31.905849  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:31.905883  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:31.983351  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:31.983372  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:31.983388  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:34.564505  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:34.577808  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:34.577884  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:34.616950  142150 cri.go:89] found id: ""
	I1212 01:07:34.616979  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.616992  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:34.617001  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:34.617071  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:34.653440  142150 cri.go:89] found id: ""
	I1212 01:07:34.653470  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.653478  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:34.653485  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:34.653535  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:34.693426  142150 cri.go:89] found id: ""
	I1212 01:07:34.693457  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.693465  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:34.693471  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:34.693520  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:34.727113  142150 cri.go:89] found id: ""
	I1212 01:07:34.727154  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.727166  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:34.727175  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:34.727237  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:34.766942  142150 cri.go:89] found id: ""
	I1212 01:07:34.766967  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.766974  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:34.766981  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:34.767032  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:34.806189  142150 cri.go:89] found id: ""
	I1212 01:07:34.806214  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.806223  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:34.806229  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:34.806293  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:34.839377  142150 cri.go:89] found id: ""
	I1212 01:07:34.839408  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.839420  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:34.839429  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:34.839486  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:34.877512  142150 cri.go:89] found id: ""
	I1212 01:07:34.877541  142150 logs.go:282] 0 containers: []
	W1212 01:07:34.877549  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:34.877558  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:34.877570  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:34.914966  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:34.914994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:34.964993  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:34.965033  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:34.979644  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:34.979677  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:35.050842  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:35.050868  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:35.050893  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:31.843547  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.843911  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:36.343719  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:33.595369  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:35.600094  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:37.634362  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:37.647476  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:37.647542  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:37.681730  142150 cri.go:89] found id: ""
	I1212 01:07:37.681760  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.681768  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:37.681775  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:37.681827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:37.716818  142150 cri.go:89] found id: ""
	I1212 01:07:37.716845  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.716858  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:37.716864  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:37.716913  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:37.753005  142150 cri.go:89] found id: ""
	I1212 01:07:37.753034  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.753042  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:37.753048  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:37.753104  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:37.789850  142150 cri.go:89] found id: ""
	I1212 01:07:37.789888  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.789900  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:37.789909  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:37.789971  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:37.826418  142150 cri.go:89] found id: ""
	I1212 01:07:37.826455  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.826466  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:37.826475  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:37.826539  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:37.862108  142150 cri.go:89] found id: ""
	I1212 01:07:37.862134  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.862143  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:37.862149  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:37.862202  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:37.897622  142150 cri.go:89] found id: ""
	I1212 01:07:37.897660  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.897673  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:37.897681  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:37.897743  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:37.935027  142150 cri.go:89] found id: ""
	I1212 01:07:37.935055  142150 logs.go:282] 0 containers: []
	W1212 01:07:37.935063  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:37.935072  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:37.935088  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:37.949860  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:37.949890  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:38.019692  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:38.019721  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:38.019740  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:38.100964  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:38.100994  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:38.144480  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:38.144514  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:38.844539  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.844997  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:38.096180  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:40.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
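For the runs above where the API server answers but metrics-server-6867b74b74-xzkbn never turns Ready, a hypothetical manual follow-up would be (pod name copied from the log; run from the host against the same cluster context the test uses):

    # Where the pod is scheduled and which condition or event is blocking Ready.
    kubectl -n kube-system get pod metrics-server-6867b74b74-xzkbn -o wide
    kubectl -n kube-system describe pod metrics-server-6867b74b74-xzkbn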
	I1212 01:07:40.699192  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:40.712311  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:40.712398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:40.748454  142150 cri.go:89] found id: ""
	I1212 01:07:40.748482  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.748490  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:40.748496  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:40.748545  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:40.785262  142150 cri.go:89] found id: ""
	I1212 01:07:40.785292  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.785305  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:40.785312  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:40.785376  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:40.821587  142150 cri.go:89] found id: ""
	I1212 01:07:40.821624  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.821636  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:40.821644  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:40.821713  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:40.882891  142150 cri.go:89] found id: ""
	I1212 01:07:40.882918  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.882926  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:40.882935  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:40.882987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:40.923372  142150 cri.go:89] found id: ""
	I1212 01:07:40.923403  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.923412  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:40.923419  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:40.923485  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:40.962753  142150 cri.go:89] found id: ""
	I1212 01:07:40.962781  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.962789  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:40.962795  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:40.962851  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:40.996697  142150 cri.go:89] found id: ""
	I1212 01:07:40.996731  142150 logs.go:282] 0 containers: []
	W1212 01:07:40.996744  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:40.996751  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:40.996812  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:41.031805  142150 cri.go:89] found id: ""
	I1212 01:07:41.031842  142150 logs.go:282] 0 containers: []
	W1212 01:07:41.031855  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:41.031866  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:41.031884  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:41.108288  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:41.108310  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:41.108333  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:41.190075  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:41.190115  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:41.235886  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:41.235927  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:41.288515  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:41.288554  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:43.803694  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:43.817859  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:43.817919  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:43.864193  142150 cri.go:89] found id: ""
	I1212 01:07:43.864221  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.864228  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:43.864234  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:43.864288  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:43.902324  142150 cri.go:89] found id: ""
	I1212 01:07:43.902359  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.902371  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:43.902379  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:43.902443  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:43.940847  142150 cri.go:89] found id: ""
	I1212 01:07:43.940880  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.940890  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:43.940896  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:43.940947  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:43.979270  142150 cri.go:89] found id: ""
	I1212 01:07:43.979302  142150 logs.go:282] 0 containers: []
	W1212 01:07:43.979314  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:43.979322  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:43.979398  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:44.024819  142150 cri.go:89] found id: ""
	I1212 01:07:44.024851  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.024863  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:44.024872  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:44.024941  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:44.062199  142150 cri.go:89] found id: ""
	I1212 01:07:44.062225  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.062234  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:44.062242  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:44.062306  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:44.097158  142150 cri.go:89] found id: ""
	I1212 01:07:44.097181  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.097188  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:44.097194  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:44.097240  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:44.132067  142150 cri.go:89] found id: ""
	I1212 01:07:44.132105  142150 logs.go:282] 0 containers: []
	W1212 01:07:44.132120  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:44.132132  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:44.132148  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:44.179552  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:44.179589  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:44.238243  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:44.238299  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:44.255451  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:44.255493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:44.331758  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:44.331784  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:44.331797  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:43.343026  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.343118  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:42.595856  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:45.096338  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:46.916033  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:46.929686  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:46.929761  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:46.966328  142150 cri.go:89] found id: ""
	I1212 01:07:46.966357  142150 logs.go:282] 0 containers: []
	W1212 01:07:46.966365  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:46.966371  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:46.966423  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:47.002014  142150 cri.go:89] found id: ""
	I1212 01:07:47.002059  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.002074  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:47.002082  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:47.002148  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:47.038127  142150 cri.go:89] found id: ""
	I1212 01:07:47.038158  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.038166  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:47.038172  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:47.038222  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:47.071654  142150 cri.go:89] found id: ""
	I1212 01:07:47.071684  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.071696  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:47.071704  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:47.071774  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:47.105489  142150 cri.go:89] found id: ""
	I1212 01:07:47.105515  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.105524  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:47.105530  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:47.105577  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.143005  142150 cri.go:89] found id: ""
	I1212 01:07:47.143042  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.143051  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:47.143058  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:47.143114  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:47.176715  142150 cri.go:89] found id: ""
	I1212 01:07:47.176746  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.176756  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:47.176764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:47.176827  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:47.211770  142150 cri.go:89] found id: ""
	I1212 01:07:47.211806  142150 logs.go:282] 0 containers: []
	W1212 01:07:47.211817  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:47.211831  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:47.211850  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:47.312766  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:47.312795  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:47.312811  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:47.402444  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:47.402493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:47.441071  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:47.441109  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:47.494465  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:47.494507  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.009996  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:50.023764  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:07:50.023832  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:07:50.060392  142150 cri.go:89] found id: ""
	I1212 01:07:50.060424  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.060433  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:07:50.060440  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:07:50.060497  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:07:50.094874  142150 cri.go:89] found id: ""
	I1212 01:07:50.094904  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.094914  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:07:50.094923  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:07:50.094987  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:07:50.128957  142150 cri.go:89] found id: ""
	I1212 01:07:50.128986  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.128996  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:07:50.129005  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:07:50.129067  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:07:50.164794  142150 cri.go:89] found id: ""
	I1212 01:07:50.164819  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.164828  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:07:50.164835  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:07:50.164890  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:07:50.201295  142150 cri.go:89] found id: ""
	I1212 01:07:50.201330  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.201342  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:07:50.201350  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:07:50.201415  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:07:47.343485  141884 pod_ready.go:103] pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:48.337317  141884 pod_ready.go:82] duration metric: took 4m0.000178627s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" ...
	E1212 01:07:48.337358  141884 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-k9s7n" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:07:48.337386  141884 pod_ready.go:39] duration metric: took 4m14.601527023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:07:48.337421  141884 kubeadm.go:597] duration metric: took 4m22.883520304s to restartPrimaryControlPlane
	W1212 01:07:48.337486  141884 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:48.337526  141884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:47.595092  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:50.096774  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.514069  141469 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.312952103s)
	I1212 01:07:54.514153  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:54.543613  141469 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:54.555514  141469 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:54.569001  141469 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:54.569024  141469 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:54.569082  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:54.583472  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:54.583553  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:54.598721  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:54.614369  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:54.614451  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:54.625630  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.643317  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:54.643398  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:54.652870  141469 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:54.662703  141469 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:54.662774  141469 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
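The config check above finds none of the expected kubeconfig files, so each grep exits non-zero and the file is removed anyway. The same cleanup written as a loop (file list and grep pattern verbatim from the log):

    # Drop each expected kubeconfig unless it already points at the expected control-plane endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done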
	I1212 01:07:54.672601  141469 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:54.722949  141469 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:07:54.723064  141469 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:54.845332  141469 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:54.845476  141469 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:54.845623  141469 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:07:54.855468  141469 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
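After the reset, the cluster is re-initialized from the freshly written kubeadm.yaml, reusing the certificates already on disk (the "[certs] Using existing ..." lines that follow). The invocation recorded above, reformatted for readability with all flags verbatim from the log:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem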
	I1212 01:07:50.236158  142150 cri.go:89] found id: ""
	I1212 01:07:50.236200  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.236212  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:07:50.236221  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:07:50.236271  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:07:50.270232  142150 cri.go:89] found id: ""
	I1212 01:07:50.270268  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.270280  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:07:50.270288  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:07:50.270356  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:07:50.303222  142150 cri.go:89] found id: ""
	I1212 01:07:50.303247  142150 logs.go:282] 0 containers: []
	W1212 01:07:50.303258  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:07:50.303270  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:07:50.303288  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 01:07:50.316845  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:07:50.316874  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:07:50.384455  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:07:50.384483  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:07:50.384500  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:07:50.462863  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:07:50.462921  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:07:50.503464  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:07:50.503495  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:07:53.063953  142150 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:07:53.079946  142150 kubeadm.go:597] duration metric: took 4m3.966538012s to restartPrimaryControlPlane
	W1212 01:07:53.080031  142150 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:07:53.080064  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:07:54.857558  141469 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:54.857689  141469 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:54.857774  141469 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:54.857890  141469 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:54.857960  141469 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:54.858038  141469 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:54.858109  141469 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:54.858214  141469 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:54.858296  141469 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:54.858396  141469 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:54.858503  141469 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:54.858557  141469 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:54.858643  141469 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:55.129859  141469 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:55.274235  141469 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:07:55.401999  141469 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:56.015091  141469 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:56.123268  141469 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:56.123820  141469 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:56.126469  141469 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:52.595027  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:54.595374  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:57.096606  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:07:58.255454  142150 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.175361092s)
	I1212 01:07:58.255545  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:07:58.270555  142150 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:07:58.281367  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:07:58.291555  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:07:58.291580  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:07:58.291652  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:07:58.301408  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:07:58.301473  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:07:58.314324  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:07:58.326559  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:07:58.326628  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:07:58.338454  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.348752  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:07:58.348815  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:07:58.361968  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:07:58.374545  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:07:58.374614  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
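	(The grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so the subsequent kubeadm init can regenerate it. A minimal sketch of that logic, assuming the paths and endpoint shown in the log and root access on the node:)

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		endpoint := []byte("https://control-plane.minikube.internal:8443")
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, endpoint) {
				fmt.Printf("%s missing or stale - removing\n", f)
				_ = os.Remove(f) // ignore "not found", like the rm -f in the log
			}
		}
	}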
	I1212 01:07:58.387280  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:07:58.474893  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:07:58.475043  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:07:58.647222  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:07:58.647400  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:07:58.647566  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:07:58.839198  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:07:56.128185  141469 out.go:235]   - Booting up control plane ...
	I1212 01:07:56.128343  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:56.128478  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:56.128577  141469 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:56.149476  141469 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:56.156042  141469 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:56.156129  141469 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:56.292423  141469 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:07:56.292567  141469 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:07:56.794594  141469 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.027526ms
	I1212 01:07:56.794711  141469 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
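	(The [kubelet-check] lines above poll the kubelet's local healthz endpoint until it answers, within a 4m0s budget. A minimal sketch of such a probe, run on the node itself; the 500ms poll interval is an assumption:)

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget kubeadm mentions
		for time.Now().Before(deadline) {
			resp, err := http.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for kubelet healthz")
	}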
	I1212 01:07:58.841061  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:07:58.841173  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:07:58.841297  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:07:58.841411  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:07:58.841491  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:07:58.841575  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:07:58.841650  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:07:58.841771  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:07:58.842200  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:07:58.842503  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:07:58.842993  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:07:58.843207  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:07:58.843355  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:07:58.919303  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:07:59.206038  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:07:59.318620  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:07:59.693734  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:07:59.709562  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:07:59.710774  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:07:59.710846  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:07:59.877625  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:07:59.879576  142150 out.go:235]   - Booting up control plane ...
	I1212 01:07:59.879733  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:07:59.892655  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:07:59.894329  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:07:59.897694  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:07:59.898269  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:07:59.594764  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:01.595663  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:02.299386  141469 kubeadm.go:310] [api-check] The API server is healthy after 5.503154599s
	I1212 01:08:02.311549  141469 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:02.326944  141469 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:02.354402  141469 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:02.354661  141469 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-607268 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:02.368168  141469 kubeadm.go:310] [bootstrap-token] Using token: 0eo07f.wy46ulxfywwd0uy8
	I1212 01:08:02.369433  141469 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:02.369569  141469 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:02.381945  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:02.407880  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:02.419211  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:02.426470  141469 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:02.437339  141469 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:02.708518  141469 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:03.143189  141469 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:03.704395  141469 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:03.705460  141469 kubeadm.go:310] 
	I1212 01:08:03.705557  141469 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:03.705576  141469 kubeadm.go:310] 
	I1212 01:08:03.705646  141469 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:03.705650  141469 kubeadm.go:310] 
	I1212 01:08:03.705672  141469 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:03.705724  141469 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:03.705768  141469 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:03.705800  141469 kubeadm.go:310] 
	I1212 01:08:03.705906  141469 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:03.705918  141469 kubeadm.go:310] 
	I1212 01:08:03.705976  141469 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:03.705987  141469 kubeadm.go:310] 
	I1212 01:08:03.706073  141469 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:03.706191  141469 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:03.706286  141469 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:03.706307  141469 kubeadm.go:310] 
	I1212 01:08:03.706438  141469 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:03.706549  141469 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:03.706556  141469 kubeadm.go:310] 
	I1212 01:08:03.706670  141469 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.706833  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:03.706864  141469 kubeadm.go:310] 	--control-plane 
	I1212 01:08:03.706869  141469 kubeadm.go:310] 
	I1212 01:08:03.706951  141469 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:03.706963  141469 kubeadm.go:310] 
	I1212 01:08:03.707035  141469 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0eo07f.wy46ulxfywwd0uy8 \
	I1212 01:08:03.707134  141469 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:03.708092  141469 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:03.708135  141469 cni.go:84] Creating CNI manager for ""
	I1212 01:08:03.708146  141469 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:03.709765  141469 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:03.711315  141469 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:03.724767  141469 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
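	(The bridge CNI step above writes a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist; the payload itself is not reproduced in the log. The sketch below writes an illustrative bridge + portmap chain of the kind such a config contains; the subnet and individual fields are assumptions, not the actual file contents.)

	package main

	import (
		"log"
		"os"
	)

	// conflist is an illustrative bridge CNI config, not the real 496-byte payload.
	const conflist = `{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}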
	I1212 01:08:03.749770  141469 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:03.749830  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:03.749896  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-607268 minikube.k8s.io/updated_at=2024_12_12T01_08_03_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=embed-certs-607268 minikube.k8s.io/primary=true
	I1212 01:08:03.973050  141469 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:03.973436  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.094838  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:06.095216  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:04.473952  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:04.974222  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.473799  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:05.974261  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.473492  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:06.974288  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.474064  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:07.974218  141469 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:08.081567  141469 kubeadm.go:1113] duration metric: took 4.331794716s to wait for elevateKubeSystemPrivileges
	I1212 01:08:08.081603  141469 kubeadm.go:394] duration metric: took 5m2.502707851s to StartCluster
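	(The elevateKubeSystemPrivileges step above creates the minikube-rbac cluster-admin binding for kube-system:default and then polls "get sa default" until the default service account exists. A rough equivalent using plain kubectl rather than minikube's bundled binary; the kubeconfig path is taken from the log, the retry budget is an assumption:)

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig" // path taken from the log

		// Bind cluster-admin to kube-system:default, mirroring the
		// "create clusterrolebinding minikube-rbac" command above.
		if out, err := exec.Command("kubectl", "create", "clusterrolebinding", "minikube-rbac",
			"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default",
			kubeconfig).CombinedOutput(); err != nil {
			fmt.Printf("clusterrolebinding: %v\n%s\n", err, out)
		}

		// Poll for the default service account roughly every 500ms, as the
		// repeated "get sa default" lines in the log suggest.
		for i := 0; i < 20; i++ {
			if exec.Command("kubectl", "get", "sa", "default", kubeconfig).Run() == nil {
				fmt.Println("default service account is present")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}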
	I1212 01:08:08.081629  141469 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.081722  141469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:08.083443  141469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:08.083783  141469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.151 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:08.083894  141469 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:08.084015  141469 config.go:182] Loaded profile config "embed-certs-607268": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:08.084027  141469 addons.go:69] Setting metrics-server=true in profile "embed-certs-607268"
	I1212 01:08:08.084045  141469 addons.go:234] Setting addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:08.084014  141469 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-607268"
	I1212 01:08:08.084054  141469 addons.go:69] Setting default-storageclass=true in profile "embed-certs-607268"
	I1212 01:08:08.084083  141469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-607268"
	I1212 01:08:08.084085  141469 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-607268"
	W1212 01:08:08.084130  141469 addons.go:243] addon storage-provisioner should already be in state true
	W1212 01:08:08.084057  141469 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084190  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.084618  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084658  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084671  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084684  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.084617  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.084756  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.085205  141469 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:08.086529  141469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:08.104090  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I1212 01:08:08.104115  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I1212 01:08:08.104092  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I1212 01:08:08.104662  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104701  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.104785  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105323  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105329  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105337  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105314  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.105382  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.105696  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105718  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.105700  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.106132  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106163  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.106364  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.106599  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.106626  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.110390  141469 addons.go:234] Setting addon default-storageclass=true in "embed-certs-607268"
	W1212 01:08:08.110415  141469 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:08.110447  141469 host.go:66] Checking if "embed-certs-607268" exists ...
	I1212 01:08:08.110811  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.110844  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.124380  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35103
	I1212 01:08:08.124888  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.125447  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.125472  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.125764  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.125966  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.126885  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1212 01:08:08.127417  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.127718  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I1212 01:08:08.127911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.127990  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128002  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.128161  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.128338  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.128541  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.128612  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.128626  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.129037  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.129640  141469 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:08.129678  141469 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:08.129905  141469 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:08.131337  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:08.131367  141469 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:08.131387  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.131816  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.133335  141469 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:08.134372  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.134696  141469 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.134714  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:08.134734  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.134851  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.134868  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.135026  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.135247  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.135405  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.135549  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.137253  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137705  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.137725  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.137810  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.137911  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.138065  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.138162  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.146888  141469 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34177
	I1212 01:08:08.147344  141469 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:08.147919  141469 main.go:141] libmachine: Using API Version  1
	I1212 01:08:08.147937  141469 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:08.148241  141469 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:08.148418  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetState
	I1212 01:08:08.150018  141469 main.go:141] libmachine: (embed-certs-607268) Calling .DriverName
	I1212 01:08:08.150282  141469 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.150299  141469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:08.150318  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHHostname
	I1212 01:08:08.152881  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153311  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHPort
	I1212 01:08:08.153327  141469 main.go:141] libmachine: (embed-certs-607268) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:f0:cf", ip: ""} in network mk-embed-certs-607268: {Iface:virbr2 ExpiryTime:2024-12-12 02:02:51 +0000 UTC Type:0 Mac:52:54:00:64:f0:cf Iaid: IPaddr:192.168.50.151 Prefix:24 Hostname:embed-certs-607268 Clientid:01:52:54:00:64:f0:cf}
	I1212 01:08:08.153344  141469 main.go:141] libmachine: (embed-certs-607268) DBG | domain embed-certs-607268 has defined IP address 192.168.50.151 and MAC address 52:54:00:64:f0:cf in network mk-embed-certs-607268
	I1212 01:08:08.153509  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHKeyPath
	I1212 01:08:08.153634  141469 main.go:141] libmachine: (embed-certs-607268) Calling .GetSSHUsername
	I1212 01:08:08.153816  141469 sshutil.go:53] new ssh client: &{IP:192.168.50.151 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/embed-certs-607268/id_rsa Username:docker}
	I1212 01:08:08.301991  141469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:08.323794  141469 node_ready.go:35] waiting up to 6m0s for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338205  141469 node_ready.go:49] node "embed-certs-607268" has status "Ready":"True"
	I1212 01:08:08.338241  141469 node_ready.go:38] duration metric: took 14.401624ms for node "embed-certs-607268" to be "Ready" ...
	I1212 01:08:08.338255  141469 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:08.355801  141469 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:08.406624  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:08.406648  141469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:08.409497  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:08.456893  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:08.456917  141469 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:08.554996  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:08.558767  141469 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:08.558793  141469 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:08.614574  141469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
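	(The addon step above stages the metrics-server manifests under /etc/kubernetes/addons and applies them in a single kubectl invocation with KUBECONFIG pointing at the node-local kubeconfig. A stripped-down sketch of that apply, assuming the same paths as in the log:)

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("apply failed: %v\n%s", err, out)
		}
	}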
	I1212 01:08:08.702483  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702513  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.702818  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.702883  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.702894  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.702904  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.702912  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.703142  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.703186  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:08.703163  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:08.714426  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:08.714450  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:08.714840  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:08.714857  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.821732  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.266688284s)
	I1212 01:08:09.821807  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.821824  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822160  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822185  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.822211  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.822225  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.822487  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.822518  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.822535  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842157  141469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.227536232s)
	I1212 01:08:09.842222  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842237  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.842627  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.842663  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.842672  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.842679  141469 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:09.842687  141469 main.go:141] libmachine: (embed-certs-607268) Calling .Close
	I1212 01:08:09.843002  141469 main.go:141] libmachine: (embed-certs-607268) DBG | Closing plugin on server side
	I1212 01:08:09.843013  141469 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:09.843028  141469 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:09.843046  141469 addons.go:475] Verifying addon metrics-server=true in "embed-certs-607268"
	I1212 01:08:09.844532  141469 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:08.098516  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:10.596197  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:09.845721  141469 addons.go:510] duration metric: took 1.761839241s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:10.400164  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:12.862616  141469 pod_ready.go:103] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:14.362448  141469 pod_ready.go:93] pod "etcd-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.362473  141469 pod_ready.go:82] duration metric: took 6.006632075s for pod "etcd-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.362486  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868198  141469 pod_ready.go:93] pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.868220  141469 pod_ready.go:82] duration metric: took 505.72656ms for pod "kube-apiserver-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.868231  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872557  141469 pod_ready.go:93] pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.872582  141469 pod_ready.go:82] duration metric: took 4.343797ms for pod "kube-controller-manager-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.872599  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876837  141469 pod_ready.go:93] pod "kube-proxy-6hw4b" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.876858  141469 pod_ready.go:82] duration metric: took 4.251529ms for pod "kube-proxy-6hw4b" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.876867  141469 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881467  141469 pod_ready.go:93] pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:14.881487  141469 pod_ready.go:82] duration metric: took 4.612567ms for pod "kube-scheduler-embed-certs-607268" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:14.881496  141469 pod_ready.go:39] duration metric: took 6.543228562s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
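	(The pod_ready block above waits for each system-critical pod to report a Ready condition of True within the 6m0s budget. A simplified version of that wait using kubectl JSONPath instead of minikube's client-go helpers; the pod names are the ones reported in the log:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func podReady(name string) bool {
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		pods := []string{
			"etcd-embed-certs-607268",
			"kube-apiserver-embed-certs-607268",
			"kube-controller-manager-embed-certs-607268",
			"kube-proxy-6hw4b",
			"kube-scheduler-embed-certs-607268",
		}
		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
		for _, p := range pods {
			for !podReady(p) {
				if time.Now().After(deadline) {
					fmt.Println("timed out waiting for", p)
					return
				}
				time.Sleep(2 * time.Second)
			}
			fmt.Println(p, "is Ready")
		}
	}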
	I1212 01:08:14.881516  141469 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:14.881571  141469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:14.898899  141469 api_server.go:72] duration metric: took 6.815070313s to wait for apiserver process to appear ...
	I1212 01:08:14.898942  141469 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:14.898963  141469 api_server.go:253] Checking apiserver healthz at https://192.168.50.151:8443/healthz ...
	I1212 01:08:14.904555  141469 api_server.go:279] https://192.168.50.151:8443/healthz returned 200:
	ok
	I1212 01:08:14.905738  141469 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:14.905762  141469 api_server.go:131] duration metric: took 6.812513ms to wait for apiserver health ...
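	(The healthz check above issues an HTTPS GET against the control-plane endpoint and expects a 200 with body "ok". A self-contained sketch of the same probe; TLS verification is skipped here only to keep the example short, whereas minikube verifies against the cluster CA:)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.50.151:8443/healthz") // endpoint from the log
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
	}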
	I1212 01:08:14.905771  141469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:14.964381  141469 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:14.964413  141469 system_pods.go:61] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:14.964418  141469 system_pods.go:61] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:14.964422  141469 system_pods.go:61] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:14.964426  141469 system_pods.go:61] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:14.964429  141469 system_pods.go:61] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:14.964432  141469 system_pods.go:61] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:14.964435  141469 system_pods.go:61] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:14.964441  141469 system_pods.go:61] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:14.964447  141469 system_pods.go:61] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:14.964460  141469 system_pods.go:74] duration metric: took 58.68072ms to wait for pod list to return data ...
	I1212 01:08:14.964476  141469 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:15.161106  141469 default_sa.go:45] found service account: "default"
	I1212 01:08:15.161137  141469 default_sa.go:55] duration metric: took 196.651344ms for default service account to be created ...
	I1212 01:08:15.161147  141469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:15.363429  141469 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:15.363457  141469 system_pods.go:89] "coredns-7c65d6cfc9-m27d6" [8420ab7f-7518-41da-a83f-8339380f5bff] Running
	I1212 01:08:15.363462  141469 system_pods.go:89] "coredns-7c65d6cfc9-m7b7f" [02e714b4-3e8d-4c9d-90e3-6fba636190fa] Running
	I1212 01:08:15.363466  141469 system_pods.go:89] "etcd-embed-certs-607268" [b14ae8d6-66d7-4dee-b1bd-893763cbbc01] Running
	I1212 01:08:15.363470  141469 system_pods.go:89] "kube-apiserver-embed-certs-607268" [a35df51d-b748-461e-901b-5f74640b090a] Running
	I1212 01:08:15.363473  141469 system_pods.go:89] "kube-controller-manager-embed-certs-607268" [9f519f46-fc56-4f11-9fa9-8657ff29e1af] Running
	I1212 01:08:15.363477  141469 system_pods.go:89] "kube-proxy-6hw4b" [2ae27b6f-a174-42eb-96a7-2e94f0f916c1] Running
	I1212 01:08:15.363480  141469 system_pods.go:89] "kube-scheduler-embed-certs-607268" [b17ebabb-be6d-4404-b6ce-bd6aa728dcde] Running
	I1212 01:08:15.363487  141469 system_pods.go:89] "metrics-server-6867b74b74-glcnv" [3c8b3109-dfcf-4329-84ff-a4c5b566b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:15.363492  141469 system_pods.go:89] "storage-provisioner" [d2421890-0e6b-4d0b-8967-6f0103e90996] Running
	I1212 01:08:15.363501  141469 system_pods.go:126] duration metric: took 202.347796ms to wait for k8s-apps to be running ...
	I1212 01:08:15.363508  141469 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:15.363553  141469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:15.378498  141469 system_svc.go:56] duration metric: took 14.977368ms WaitForService to wait for kubelet
	I1212 01:08:15.378527  141469 kubeadm.go:582] duration metric: took 7.294704666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:15.378545  141469 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:15.561384  141469 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:15.561408  141469 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:15.561422  141469 node_conditions.go:105] duration metric: took 182.869791ms to run NodePressure ...
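	(The NodePressure step above reads the node's cpu and ephemeral-storage capacity, reported here as 2 CPUs and 17734596Ki. A small sketch that fetches the same fields with kubectl JSONPath for the node named in the log:)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "node", "embed-certs-607268",
			"-o", "jsonpath=cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}").Output()
		if err != nil {
			fmt.Println("capacity read failed:", err)
			return
		}
		fmt.Println(string(out))
	}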
	I1212 01:08:15.561435  141469 start.go:241] waiting for startup goroutines ...
	I1212 01:08:15.561442  141469 start.go:246] waiting for cluster config update ...
	I1212 01:08:15.561453  141469 start.go:255] writing updated cluster config ...
	I1212 01:08:15.561693  141469 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:15.615106  141469 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:15.617073  141469 out.go:177] * Done! kubectl is now configured to use "embed-certs-607268" cluster and "default" namespace by default
	I1212 01:08:14.771660  141884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.434092304s)
	I1212 01:08:14.771750  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:14.802721  141884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:08:14.813349  141884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:08:14.826608  141884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:08:14.826637  141884 kubeadm.go:157] found existing configuration files:
	
	I1212 01:08:14.826693  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1212 01:08:14.842985  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:08:14.843060  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:08:14.855326  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1212 01:08:14.872371  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:08:14.872449  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:08:14.883793  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.894245  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:08:14.894306  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:08:14.906163  141884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1212 01:08:14.915821  141884 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:08:14.915867  141884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:08:14.926019  141884 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:08:15.092424  141884 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:08:13.094823  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:15.096259  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:17.596953  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:20.095957  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:22.096970  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:23.562216  141884 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:08:23.562302  141884 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:08:23.562463  141884 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:08:23.562655  141884 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:08:23.562786  141884 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:08:23.562870  141884 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:08:23.564412  141884 out.go:235]   - Generating certificates and keys ...
	I1212 01:08:23.564519  141884 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:08:23.564605  141884 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:08:23.564718  141884 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:08:23.564802  141884 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:08:23.564879  141884 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:08:23.564925  141884 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:08:23.565011  141884 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:08:23.565110  141884 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:08:23.565230  141884 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:08:23.565352  141884 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:08:23.565393  141884 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:08:23.565439  141884 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:08:23.565485  141884 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:08:23.565537  141884 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:08:23.565582  141884 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:08:23.565636  141884 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:08:23.565700  141884 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:08:23.565786  141884 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:08:23.565885  141884 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:08:23.567104  141884 out.go:235]   - Booting up control plane ...
	I1212 01:08:23.567195  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:08:23.567267  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:08:23.567353  141884 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:08:23.567472  141884 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:08:23.567579  141884 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:08:23.567662  141884 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:08:23.567812  141884 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:08:23.567953  141884 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:08:23.568010  141884 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001996966s
	I1212 01:08:23.568071  141884 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:08:23.568125  141884 kubeadm.go:310] [api-check] The API server is healthy after 5.001946459s
	I1212 01:08:23.568266  141884 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:08:23.568424  141884 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:08:23.568510  141884 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:08:23.568702  141884 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-076578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:08:23.568789  141884 kubeadm.go:310] [bootstrap-token] Using token: 472xql.x3zqihc9l5oj308m
	I1212 01:08:23.570095  141884 out.go:235]   - Configuring RBAC rules ...
	I1212 01:08:23.570226  141884 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:08:23.570353  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:08:23.570550  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:08:23.570719  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:08:23.570880  141884 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:08:23.571006  141884 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:08:23.571186  141884 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:08:23.571245  141884 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:08:23.571322  141884 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:08:23.571333  141884 kubeadm.go:310] 
	I1212 01:08:23.571411  141884 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:08:23.571421  141884 kubeadm.go:310] 
	I1212 01:08:23.571530  141884 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:08:23.571551  141884 kubeadm.go:310] 
	I1212 01:08:23.571609  141884 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:08:23.571711  141884 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:08:23.571795  141884 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:08:23.571808  141884 kubeadm.go:310] 
	I1212 01:08:23.571892  141884 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:08:23.571907  141884 kubeadm.go:310] 
	I1212 01:08:23.571985  141884 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:08:23.571992  141884 kubeadm.go:310] 
	I1212 01:08:23.572069  141884 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:08:23.572184  141884 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:08:23.572276  141884 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:08:23.572286  141884 kubeadm.go:310] 
	I1212 01:08:23.572413  141884 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:08:23.572516  141884 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:08:23.572525  141884 kubeadm.go:310] 
	I1212 01:08:23.572656  141884 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.572805  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:08:23.572847  141884 kubeadm.go:310] 	--control-plane 
	I1212 01:08:23.572856  141884 kubeadm.go:310] 
	I1212 01:08:23.572973  141884 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:08:23.572991  141884 kubeadm.go:310] 
	I1212 01:08:23.573107  141884 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 472xql.x3zqihc9l5oj308m \
	I1212 01:08:23.573248  141884 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:08:23.573273  141884 cni.go:84] Creating CNI manager for ""
	I1212 01:08:23.573283  141884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:08:23.574736  141884 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:08:23.575866  141884 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:08:23.590133  141884 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:08:23.613644  141884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:08:23.613737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:23.613759  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-076578 minikube.k8s.io/updated_at=2024_12_12T01_08_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=default-k8s-diff-port-076578 minikube.k8s.io/primary=true
	I1212 01:08:23.642646  141884 ops.go:34] apiserver oom_adj: -16
	I1212 01:08:23.831478  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.331749  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.832158  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.331630  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:25.831737  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:26.331787  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:24.597126  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:27.095607  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:26.831860  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.331748  141884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:08:27.448891  141884 kubeadm.go:1113] duration metric: took 3.835231667s to wait for elevateKubeSystemPrivileges
	I1212 01:08:27.448930  141884 kubeadm.go:394] duration metric: took 5m2.053707834s to StartCluster
	I1212 01:08:27.448957  141884 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.449060  141884 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:08:27.450918  141884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:08:27.451183  141884 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:08:27.451263  141884 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:08:27.451385  141884 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451409  141884 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451417  141884 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:08:27.451413  141884 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451449  141884 config.go:182] Loaded profile config "default-k8s-diff-port-076578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 01:08:27.451454  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451465  141884 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-076578"
	I1212 01:08:27.451423  141884 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-076578"
	I1212 01:08:27.451570  141884 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.451586  141884 addons.go:243] addon metrics-server should already be in state true
	I1212 01:08:27.451648  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.451876  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451905  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.451927  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.451942  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452055  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.452096  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.452939  141884 out.go:177] * Verifying Kubernetes components...
	I1212 01:08:27.454521  141884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:08:27.467512  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1212 01:08:27.467541  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I1212 01:08:27.467581  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41267
	I1212 01:08:27.468032  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468069  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468039  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.468580  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468592  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468604  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468609  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468620  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.468635  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.468968  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.468999  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.469191  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.469562  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469579  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.469613  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.469623  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.472898  141884 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-076578"
	W1212 01:08:27.472925  141884 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:08:27.472956  141884 host.go:66] Checking if "default-k8s-diff-port-076578" exists ...
	I1212 01:08:27.473340  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.473389  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.485014  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I1212 01:08:27.485438  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.486058  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.486077  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.486629  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.486832  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.487060  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I1212 01:08:27.487779  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.488503  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.488527  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.488910  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.489132  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.489304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.489892  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42411
	I1212 01:08:27.490599  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.490758  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.491213  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.491236  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.491385  141884 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:08:27.491606  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.492230  141884 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:08:27.492375  141884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:08:27.492420  141884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:08:27.493368  141884 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.493382  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:08:27.493397  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.493462  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:08:27.493468  141884 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:08:27.493481  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.496807  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497273  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.497304  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497474  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.497647  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.497691  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.497771  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.497922  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.498178  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.498190  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.498288  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.498467  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.498634  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.498779  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.512025  141884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1212 01:08:27.512490  141884 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:08:27.513168  141884 main.go:141] libmachine: Using API Version  1
	I1212 01:08:27.513187  141884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:08:27.513474  141884 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:08:27.513664  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetState
	I1212 01:08:27.514930  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .DriverName
	I1212 01:08:27.515106  141884 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.515119  141884 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:08:27.515131  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHHostname
	I1212 01:08:27.520051  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520084  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:0c:23", ip: ""} in network mk-default-k8s-diff-port-076578: {Iface:virbr1 ExpiryTime:2024-12-12 02:03:11 +0000 UTC Type:0 Mac:52:54:00:4f:0c:23 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:default-k8s-diff-port-076578 Clientid:01:52:54:00:4f:0c:23}
	I1212 01:08:27.520183  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | domain default-k8s-diff-port-076578 has defined IP address 192.168.39.174 and MAC address 52:54:00:4f:0c:23 in network mk-default-k8s-diff-port-076578
	I1212 01:08:27.520419  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHPort
	I1212 01:08:27.520574  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHKeyPath
	I1212 01:08:27.520737  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .GetSSHUsername
	I1212 01:08:27.520828  141884 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/default-k8s-diff-port-076578/id_rsa Username:docker}
	I1212 01:08:27.692448  141884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:08:27.712214  141884 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724269  141884 node_ready.go:49] node "default-k8s-diff-port-076578" has status "Ready":"True"
	I1212 01:08:27.724301  141884 node_ready.go:38] duration metric: took 12.044784ms for node "default-k8s-diff-port-076578" to be "Ready" ...
	I1212 01:08:27.724313  141884 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:27.729135  141884 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:27.768566  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:08:27.768596  141884 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:08:27.782958  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:08:27.797167  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:08:27.797190  141884 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:08:27.828960  141884 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:27.828983  141884 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:08:27.871251  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:08:27.883614  141884 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:08:28.198044  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198090  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198457  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198510  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198522  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.198532  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.198544  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.198817  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.198815  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.198844  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.277379  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.277405  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.277719  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.277741  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955418  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084128053s)
	I1212 01:08:28.955472  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955485  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955561  141884 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.071904294s)
	I1212 01:08:28.955624  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955646  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.955856  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.955874  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.955881  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.955888  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.957731  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957740  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) DBG | Closing plugin on server side
	I1212 01:08:28.957748  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957761  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957802  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.957814  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.957823  141884 main.go:141] libmachine: Making call to close driver server
	I1212 01:08:28.957836  141884 main.go:141] libmachine: (default-k8s-diff-port-076578) Calling .Close
	I1212 01:08:28.958072  141884 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:08:28.958090  141884 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:08:28.958100  141884 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-076578"
	I1212 01:08:28.959879  141884 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:08:28.961027  141884 addons.go:510] duration metric: took 1.509771178s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:08:29.241061  141884 pod_ready.go:93] pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:29.241090  141884 pod_ready.go:82] duration metric: took 1.511925292s for pod "etcd-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:29.241106  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:31.247610  141884 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:29.095906  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:31.593942  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:33.246910  141884 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.246933  141884 pod_ready.go:82] duration metric: took 4.005818542s for pod "kube-apiserver-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.246944  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753325  141884 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.753350  141884 pod_ready.go:82] duration metric: took 506.39921ms for pod "kube-controller-manager-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.753360  141884 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758733  141884 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace has status "Ready":"True"
	I1212 01:08:33.758759  141884 pod_ready.go:82] duration metric: took 5.391762ms for pod "kube-scheduler-default-k8s-diff-port-076578" in "kube-system" namespace to be "Ready" ...
	I1212 01:08:33.758769  141884 pod_ready.go:39] duration metric: took 6.034446537s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:33.758789  141884 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:08:33.758854  141884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:08:33.774952  141884 api_server.go:72] duration metric: took 6.323732468s to wait for apiserver process to appear ...
	I1212 01:08:33.774976  141884 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:08:33.774995  141884 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8444/healthz ...
	I1212 01:08:33.780463  141884 api_server.go:279] https://192.168.39.174:8444/healthz returned 200:
	ok
	I1212 01:08:33.781364  141884 api_server.go:141] control plane version: v1.31.2
	I1212 01:08:33.781387  141884 api_server.go:131] duration metric: took 6.404187ms to wait for apiserver health ...
	I1212 01:08:33.781396  141884 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:08:33.786570  141884 system_pods.go:59] 9 kube-system pods found
	I1212 01:08:33.786591  141884 system_pods.go:61] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.786596  141884 system_pods.go:61] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.786599  141884 system_pods.go:61] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.786603  141884 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.786606  141884 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.786610  141884 system_pods.go:61] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.786615  141884 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.786623  141884 system_pods.go:61] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.786630  141884 system_pods.go:61] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.786643  141884 system_pods.go:74] duration metric: took 5.239236ms to wait for pod list to return data ...
	I1212 01:08:33.786655  141884 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:08:33.789776  141884 default_sa.go:45] found service account: "default"
	I1212 01:08:33.789794  141884 default_sa.go:55] duration metric: took 3.13371ms for default service account to be created ...
	I1212 01:08:33.789801  141884 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:08:33.794118  141884 system_pods.go:86] 9 kube-system pods found
	I1212 01:08:33.794139  141884 system_pods.go:89] "coredns-7c65d6cfc9-9plj4" [d6e559d2-f6ac-4c21-b344-96266b6d3622] Running
	I1212 01:08:33.794145  141884 system_pods.go:89] "coredns-7c65d6cfc9-v6j4v" [710be306-064a-4506-9649-51853913362d] Running
	I1212 01:08:33.794149  141884 system_pods.go:89] "etcd-default-k8s-diff-port-076578" [76f28960-e9e5-4c95-86dc-371719adc5f2] Running
	I1212 01:08:33.794154  141884 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-076578" [a23c07de-eaf9-433a-bd36-b52cd77aa5d5] Running
	I1212 01:08:33.794157  141884 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-076578" [d53fdbba-7ab2-4f5f-8b3f-fa80c6858bc1] Running
	I1212 01:08:33.794161  141884 system_pods.go:89] "kube-proxy-gd2mq" [db6293f3-649a-4a96-8e4c-1028fa12b909] Running
	I1212 01:08:33.794165  141884 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-076578" [dc6a2eee-44bf-43ae-b0ea-ba56ebcceca7] Running
	I1212 01:08:33.794170  141884 system_pods.go:89] "metrics-server-6867b74b74-dkmwp" [ba79e06c-1471-43a1-9977-f8977b38fb46] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:08:33.794177  141884 system_pods.go:89] "storage-provisioner" [b67b42bd-ae67-4446-99ec-451650bd8c11] Running
	I1212 01:08:33.794185  141884 system_pods.go:126] duration metric: took 4.378791ms to wait for k8s-apps to be running ...
	I1212 01:08:33.794194  141884 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:08:33.794233  141884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:08:33.809257  141884 system_svc.go:56] duration metric: took 15.051528ms WaitForService to wait for kubelet
	I1212 01:08:33.809290  141884 kubeadm.go:582] duration metric: took 6.358073584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:08:33.809323  141884 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:08:33.813154  141884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:08:33.813174  141884 node_conditions.go:123] node cpu capacity is 2
	I1212 01:08:33.813183  141884 node_conditions.go:105] duration metric: took 3.85493ms to run NodePressure ...
	I1212 01:08:33.813194  141884 start.go:241] waiting for startup goroutines ...
	I1212 01:08:33.813200  141884 start.go:246] waiting for cluster config update ...
	I1212 01:08:33.813210  141884 start.go:255] writing updated cluster config ...
	I1212 01:08:33.813474  141884 ssh_runner.go:195] Run: rm -f paused
	I1212 01:08:33.862511  141884 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:08:33.864367  141884 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-076578" cluster and "default" namespace by default
	I1212 01:08:33.594621  141411 pod_ready.go:103] pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace has status "Ready":"False"
	I1212 01:08:34.589133  141411 pod_ready.go:82] duration metric: took 4m0.000384717s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" ...
	E1212 01:08:34.589166  141411 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-xzkbn" in "kube-system" namespace to be "Ready" (will not retry!)
	I1212 01:08:34.589184  141411 pod_ready.go:39] duration metric: took 4m8.190648334s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:08:34.589214  141411 kubeadm.go:597] duration metric: took 4m15.984656847s to restartPrimaryControlPlane
	W1212 01:08:34.589299  141411 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 01:08:34.589327  141411 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:08:39.900234  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:08:39.900966  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:39.901216  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:44.901739  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:44.901921  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:08:54.902652  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:08:54.902877  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:00.919650  141411 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.330292422s)
	I1212 01:09:00.919762  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:00.956649  141411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 01:09:00.976311  141411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:00.999339  141411 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:00.999364  141411 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:00.999413  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:01.013048  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:01.013112  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:01.027407  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:01.036801  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:01.036854  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:01.046865  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.056325  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:01.056390  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:01.066574  141411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:01.078080  141411 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:01.078130  141411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:09:01.088810  141411 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:01.249481  141411 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:09.318633  141411 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1212 01:09:09.318694  141411 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:09:09.318789  141411 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:09:09.318924  141411 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:09:09.319074  141411 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 01:09:09.319185  141411 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:09:09.320615  141411 out.go:235]   - Generating certificates and keys ...
	I1212 01:09:09.320710  141411 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:09:09.320803  141411 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:09:09.320886  141411 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:09:09.320957  141411 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:09:09.321061  141411 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:09:09.321118  141411 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:09:09.321188  141411 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:09:09.321249  141411 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:09:09.321334  141411 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:09:09.321442  141411 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:09:09.321516  141411 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:09:09.321611  141411 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:09:09.321698  141411 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:09:09.321775  141411 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 01:09:09.321849  141411 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:09:09.321924  141411 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:09:09.321973  141411 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:09:09.322099  141411 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:09:09.322204  141411 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:09:09.323661  141411 out.go:235]   - Booting up control plane ...
	I1212 01:09:09.323780  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:09:09.323864  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:09:09.323950  141411 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:09:09.324082  141411 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:09:09.324181  141411 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:09:09.324255  141411 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:09:09.324431  141411 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 01:09:09.324571  141411 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 01:09:09.324647  141411 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.39943ms
	I1212 01:09:09.324730  141411 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1212 01:09:09.324780  141411 kubeadm.go:310] [api-check] The API server is healthy after 5.001520724s
	I1212 01:09:09.324876  141411 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 01:09:09.325036  141411 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 01:09:09.325136  141411 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 01:09:09.325337  141411 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-242725 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 01:09:09.325401  141411 kubeadm.go:310] [bootstrap-token] Using token: k8uf20.0v0t2d7mhtmwxurz
	I1212 01:09:09.326715  141411 out.go:235]   - Configuring RBAC rules ...
	I1212 01:09:09.326840  141411 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 01:09:09.326938  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 01:09:09.327149  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 01:09:09.327329  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 01:09:09.327498  141411 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 01:09:09.327643  141411 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 01:09:09.327787  141411 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 01:09:09.327852  141411 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1212 01:09:09.327926  141411 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1212 01:09:09.327935  141411 kubeadm.go:310] 
	I1212 01:09:09.328027  141411 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1212 01:09:09.328036  141411 kubeadm.go:310] 
	I1212 01:09:09.328138  141411 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1212 01:09:09.328148  141411 kubeadm.go:310] 
	I1212 01:09:09.328183  141411 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1212 01:09:09.328253  141411 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 01:09:09.328302  141411 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 01:09:09.328308  141411 kubeadm.go:310] 
	I1212 01:09:09.328396  141411 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1212 01:09:09.328413  141411 kubeadm.go:310] 
	I1212 01:09:09.328478  141411 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 01:09:09.328489  141411 kubeadm.go:310] 
	I1212 01:09:09.328554  141411 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1212 01:09:09.328643  141411 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 01:09:09.328719  141411 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 01:09:09.328727  141411 kubeadm.go:310] 
	I1212 01:09:09.328797  141411 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 01:09:09.328885  141411 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1212 01:09:09.328894  141411 kubeadm.go:310] 
	I1212 01:09:09.328997  141411 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329096  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b \
	I1212 01:09:09.329120  141411 kubeadm.go:310] 	--control-plane 
	I1212 01:09:09.329126  141411 kubeadm.go:310] 
	I1212 01:09:09.329201  141411 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1212 01:09:09.329209  141411 kubeadm.go:310] 
	I1212 01:09:09.329276  141411 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k8uf20.0v0t2d7mhtmwxurz \
	I1212 01:09:09.329374  141411 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4154e7eae42e1564ca85ef7ac398e0375de6c20bae3d3359067e15f1e845457b 
	I1212 01:09:09.329386  141411 cni.go:84] Creating CNI manager for ""
	I1212 01:09:09.329393  141411 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1212 01:09:09.330870  141411 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1212 01:09:09.332191  141411 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1212 01:09:09.345593  141411 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1212 01:09:09.366177  141411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 01:09:09.366234  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:09.366252  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-242725 minikube.k8s.io/updated_at=2024_12_12T01_09_09_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7fa9733e2a94b345defa62bfa3d9dc225cfbd458 minikube.k8s.io/name=no-preload-242725 minikube.k8s.io/primary=true
	I1212 01:09:09.589709  141411 ops.go:34] apiserver oom_adj: -16
	I1212 01:09:09.589889  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.090703  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:10.590697  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.090698  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:11.590027  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.090413  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:12.590626  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.090322  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:13.590174  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.090032  141411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 01:09:14.233581  141411 kubeadm.go:1113] duration metric: took 4.867404479s to wait for elevateKubeSystemPrivileges
	I1212 01:09:14.233636  141411 kubeadm.go:394] duration metric: took 4m55.678870659s to StartCluster
	I1212 01:09:14.233674  141411 settings.go:142] acquiring lock: {Name:mkb7d82ac772f9b45e9858b92e5825a0b138364d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.233790  141411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 01:09:14.236087  141411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20083-86355/kubeconfig: {Name:mk831290e6fb645e9587b0c90ea962ad38ff74cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 01:09:14.236385  141411 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.222 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 01:09:14.236460  141411 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 01:09:14.236567  141411 addons.go:69] Setting storage-provisioner=true in profile "no-preload-242725"
	I1212 01:09:14.236583  141411 addons.go:69] Setting default-storageclass=true in profile "no-preload-242725"
	I1212 01:09:14.236610  141411 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-242725"
	I1212 01:09:14.236611  141411 addons.go:69] Setting metrics-server=true in profile "no-preload-242725"
	I1212 01:09:14.236631  141411 addons.go:234] Setting addon metrics-server=true in "no-preload-242725"
	W1212 01:09:14.236646  141411 addons.go:243] addon metrics-server should already be in state true
	I1212 01:09:14.236682  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.236588  141411 addons.go:234] Setting addon storage-provisioner=true in "no-preload-242725"
	I1212 01:09:14.236687  141411 config.go:182] Loaded profile config "no-preload-242725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	W1212 01:09:14.236712  141411 addons.go:243] addon storage-provisioner should already be in state true
	I1212 01:09:14.236838  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.237093  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237141  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237185  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237101  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.237227  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237235  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.237863  141411 out.go:177] * Verifying Kubernetes components...
	I1212 01:09:14.239284  141411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 01:09:14.254182  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37123
	I1212 01:09:14.254405  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I1212 01:09:14.254418  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33769
	I1212 01:09:14.254742  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254857  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.254874  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255388  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255415  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255364  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.255439  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.255803  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255814  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.255807  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.256218  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.256360  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256396  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.256524  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.256567  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.259313  141411 addons.go:234] Setting addon default-storageclass=true in "no-preload-242725"
	W1212 01:09:14.259330  141411 addons.go:243] addon default-storageclass should already be in state true
	I1212 01:09:14.259357  141411 host.go:66] Checking if "no-preload-242725" exists ...
	I1212 01:09:14.259575  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.259621  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.273148  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40735
	I1212 01:09:14.273601  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.273909  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42237
	I1212 01:09:14.274174  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274200  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274282  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.274560  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.274785  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.274801  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.274866  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.275126  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.275280  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.276840  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.277013  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.278945  141411 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 01:09:14.279016  141411 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1212 01:09:14.903981  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:14.904298  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:14.280219  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 01:09:14.280239  141411 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 01:09:14.280268  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.280440  141411 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.280450  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 01:09:14.280464  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.281368  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I1212 01:09:14.282054  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.282652  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.282673  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.283314  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.283947  141411 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 01:09:14.283990  141411 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 01:09:14.284230  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284232  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.284802  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.284830  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285052  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285088  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.285106  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.285247  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285458  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285483  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.285619  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.285624  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.285761  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.285880  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.323872  141411 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42503
	I1212 01:09:14.324336  141411 main.go:141] libmachine: () Calling .GetVersion
	I1212 01:09:14.324884  141411 main.go:141] libmachine: Using API Version  1
	I1212 01:09:14.324906  141411 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 01:09:14.325248  141411 main.go:141] libmachine: () Calling .GetMachineName
	I1212 01:09:14.325437  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetState
	I1212 01:09:14.326991  141411 main.go:141] libmachine: (no-preload-242725) Calling .DriverName
	I1212 01:09:14.327217  141411 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.327237  141411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 01:09:14.327258  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHHostname
	I1212 01:09:14.330291  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.330895  141411 main.go:141] libmachine: (no-preload-242725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:6f:4a", ip: ""} in network mk-no-preload-242725: {Iface:virbr3 ExpiryTime:2024-12-12 02:03:52 +0000 UTC Type:0 Mac:52:54:00:ab:6f:4a Iaid: IPaddr:192.168.61.222 Prefix:24 Hostname:no-preload-242725 Clientid:01:52:54:00:ab:6f:4a}
	I1212 01:09:14.330910  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHPort
	I1212 01:09:14.330926  141411 main.go:141] libmachine: (no-preload-242725) DBG | domain no-preload-242725 has defined IP address 192.168.61.222 and MAC address 52:54:00:ab:6f:4a in network mk-no-preload-242725
	I1212 01:09:14.331062  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHKeyPath
	I1212 01:09:14.331219  141411 main.go:141] libmachine: (no-preload-242725) Calling .GetSSHUsername
	I1212 01:09:14.331343  141411 sshutil.go:53] new ssh client: &{IP:192.168.61.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/no-preload-242725/id_rsa Username:docker}
	I1212 01:09:14.411182  141411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 01:09:14.454298  141411 node_ready.go:35] waiting up to 6m0s for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467328  141411 node_ready.go:49] node "no-preload-242725" has status "Ready":"True"
	I1212 01:09:14.467349  141411 node_ready.go:38] duration metric: took 13.017274ms for node "no-preload-242725" to be "Ready" ...
	I1212 01:09:14.467359  141411 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:14.482865  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:14.557685  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 01:09:14.594366  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 01:09:14.602730  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 01:09:14.602760  141411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1212 01:09:14.666446  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 01:09:14.666474  141411 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 01:09:14.746040  141411 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.746075  141411 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 01:09:14.799479  141411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 01:09:14.862653  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.862688  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863687  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.863706  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.863721  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.863730  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.863740  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:14.863988  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.864007  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878604  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:14.878630  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:14.878903  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:14.878944  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:14.878914  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.914665  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.320255607s)
	I1212 01:09:15.914726  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.914741  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915158  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:15.915204  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915219  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:15.915236  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:15.915249  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:15.915499  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:15.915528  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.106582  141411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.307047373s)
	I1212 01:09:16.106635  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.106652  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107000  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107020  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107030  141411 main.go:141] libmachine: Making call to close driver server
	I1212 01:09:16.107037  141411 main.go:141] libmachine: (no-preload-242725) Calling .Close
	I1212 01:09:16.107298  141411 main.go:141] libmachine: Successfully made call to close driver server
	I1212 01:09:16.107317  141411 main.go:141] libmachine: Making call to close connection to plugin binary
	I1212 01:09:16.107328  141411 addons.go:475] Verifying addon metrics-server=true in "no-preload-242725"
	I1212 01:09:16.107305  141411 main.go:141] libmachine: (no-preload-242725) DBG | Closing plugin on server side
	I1212 01:09:16.108981  141411 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1212 01:09:16.110608  141411 addons.go:510] duration metric: took 1.874161814s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1212 01:09:16.498983  141411 pod_ready.go:103] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"False"
	I1212 01:09:16.989762  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:16.989784  141411 pod_ready.go:82] duration metric: took 2.506893862s for pod "coredns-7c65d6cfc9-kv2c6" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:16.989795  141411 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996560  141411 pod_ready.go:93] pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:17.996582  141411 pod_ready.go:82] duration metric: took 1.00678165s for pod "coredns-7c65d6cfc9-tflp9" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:17.996593  141411 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002275  141411 pod_ready.go:93] pod "etcd-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.002294  141411 pod_ready.go:82] duration metric: took 5.694407ms for pod "etcd-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.002308  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006942  141411 pod_ready.go:93] pod "kube-apiserver-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.006965  141411 pod_ready.go:82] duration metric: took 4.650802ms for pod "kube-apiserver-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.006978  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011581  141411 pod_ready.go:93] pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.011621  141411 pod_ready.go:82] duration metric: took 4.634646ms for pod "kube-controller-manager-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.011634  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187112  141411 pod_ready.go:93] pod "kube-proxy-5kc2s" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.187143  141411 pod_ready.go:82] duration metric: took 175.498685ms for pod "kube-proxy-5kc2s" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.187156  141411 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.586974  141411 pod_ready.go:93] pod "kube-scheduler-no-preload-242725" in "kube-system" namespace has status "Ready":"True"
	I1212 01:09:18.587003  141411 pod_ready.go:82] duration metric: took 399.836187ms for pod "kube-scheduler-no-preload-242725" in "kube-system" namespace to be "Ready" ...
	I1212 01:09:18.587012  141411 pod_ready.go:39] duration metric: took 4.119642837s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1212 01:09:18.587032  141411 api_server.go:52] waiting for apiserver process to appear ...
	I1212 01:09:18.587091  141411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 01:09:18.603406  141411 api_server.go:72] duration metric: took 4.366985373s to wait for apiserver process to appear ...
	I1212 01:09:18.603446  141411 api_server.go:88] waiting for apiserver healthz status ...
	I1212 01:09:18.603473  141411 api_server.go:253] Checking apiserver healthz at https://192.168.61.222:8443/healthz ...
	I1212 01:09:18.609003  141411 api_server.go:279] https://192.168.61.222:8443/healthz returned 200:
	ok
	I1212 01:09:18.609950  141411 api_server.go:141] control plane version: v1.31.2
	I1212 01:09:18.609968  141411 api_server.go:131] duration metric: took 6.513408ms to wait for apiserver health ...
	I1212 01:09:18.609976  141411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 01:09:18.790460  141411 system_pods.go:59] 9 kube-system pods found
	I1212 01:09:18.790494  141411 system_pods.go:61] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:18.790502  141411 system_pods.go:61] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:18.790507  141411 system_pods.go:61] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:18.790510  141411 system_pods.go:61] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:18.790515  141411 system_pods.go:61] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:18.790520  141411 system_pods.go:61] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:18.790525  141411 system_pods.go:61] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:18.790534  141411 system_pods.go:61] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:18.790540  141411 system_pods.go:61] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:18.790556  141411 system_pods.go:74] duration metric: took 180.570066ms to wait for pod list to return data ...
	I1212 01:09:18.790566  141411 default_sa.go:34] waiting for default service account to be created ...
	I1212 01:09:18.987130  141411 default_sa.go:45] found service account: "default"
	I1212 01:09:18.987172  141411 default_sa.go:55] duration metric: took 196.594497ms for default service account to be created ...
	I1212 01:09:18.987185  141411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 01:09:19.189233  141411 system_pods.go:86] 9 kube-system pods found
	I1212 01:09:19.189262  141411 system_pods.go:89] "coredns-7c65d6cfc9-kv2c6" [39249ae0-a54d-455d-a2ce-870c71fd2c03] Running
	I1212 01:09:19.189267  141411 system_pods.go:89] "coredns-7c65d6cfc9-tflp9" [edfd3f91-47ce-497c-ae3f-2c200e084be5] Running
	I1212 01:09:19.189271  141411 system_pods.go:89] "etcd-no-preload-242725" [78e64e5d-b658-4080-b37a-2daa0a588d6d] Running
	I1212 01:09:19.189274  141411 system_pods.go:89] "kube-apiserver-no-preload-242725" [9729a997-671e-44c3-bc1e-4b125192c076] Running
	I1212 01:09:19.189290  141411 system_pods.go:89] "kube-controller-manager-no-preload-242725" [e387c6c6-e9a8-4ce0-a574-ae7e64c18cb8] Running
	I1212 01:09:19.189294  141411 system_pods.go:89] "kube-proxy-5kc2s" [965f5b8a-25d3-40ed-89ee-9a4450864b73] Running
	I1212 01:09:19.189300  141411 system_pods.go:89] "kube-scheduler-no-preload-242725" [d1f985ef-e175-45e7-9974-4366b53f18d2] Running
	I1212 01:09:19.189308  141411 system_pods.go:89] "metrics-server-6867b74b74-m2g6s" [b0879479-4335-4782-b15a-83f22d85139e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 01:09:19.189318  141411 system_pods.go:89] "storage-provisioner" [76e9f3eb-72ea-49a3-9711-6a5f98455322] Running
	I1212 01:09:19.189331  141411 system_pods.go:126] duration metric: took 202.137957ms to wait for k8s-apps to be running ...
	I1212 01:09:19.189341  141411 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 01:09:19.189391  141411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:19.204241  141411 system_svc.go:56] duration metric: took 14.889522ms WaitForService to wait for kubelet
	I1212 01:09:19.204272  141411 kubeadm.go:582] duration metric: took 4.967858935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 01:09:19.204289  141411 node_conditions.go:102] verifying NodePressure condition ...
	I1212 01:09:19.387735  141411 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1212 01:09:19.387760  141411 node_conditions.go:123] node cpu capacity is 2
	I1212 01:09:19.387768  141411 node_conditions.go:105] duration metric: took 183.47486ms to run NodePressure ...
	I1212 01:09:19.387780  141411 start.go:241] waiting for startup goroutines ...
	I1212 01:09:19.387787  141411 start.go:246] waiting for cluster config update ...
	I1212 01:09:19.387796  141411 start.go:255] writing updated cluster config ...
	I1212 01:09:19.388041  141411 ssh_runner.go:195] Run: rm -f paused
	I1212 01:09:19.437923  141411 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1212 01:09:19.439913  141411 out.go:177] * Done! kubectl is now configured to use "no-preload-242725" cluster and "default" namespace by default
	I1212 01:09:54.906484  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:09:54.906805  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:09:54.906828  142150 kubeadm.go:310] 
	I1212 01:09:54.906866  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:09:54.906908  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:09:54.906915  142150 kubeadm.go:310] 
	I1212 01:09:54.906944  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:09:54.906974  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:09:54.907087  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:09:54.907106  142150 kubeadm.go:310] 
	I1212 01:09:54.907205  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:09:54.907240  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:09:54.907271  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:09:54.907277  142150 kubeadm.go:310] 
	I1212 01:09:54.907369  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:09:54.907474  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:09:54.907499  142150 kubeadm.go:310] 
	I1212 01:09:54.907659  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:09:54.907749  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:09:54.907815  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:09:54.907920  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:09:54.907937  142150 kubeadm.go:310] 
	I1212 01:09:54.909051  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:09:54.909171  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:09:54.909277  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1212 01:09:54.909442  142150 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 01:09:54.909493  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 01:09:55.377787  142150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 01:09:55.393139  142150 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 01:09:55.403640  142150 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 01:09:55.403664  142150 kubeadm.go:157] found existing configuration files:
	
	I1212 01:09:55.403707  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 01:09:55.413315  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 01:09:55.413394  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 01:09:55.422954  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 01:09:55.432010  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 01:09:55.432073  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 01:09:55.441944  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.451991  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 01:09:55.452064  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 01:09:55.461584  142150 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 01:09:55.471118  142150 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 01:09:55.471191  142150 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 01:09:55.480829  142150 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1212 01:09:55.713359  142150 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 01:11:51.592618  142150 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1212 01:11:51.592716  142150 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1212 01:11:51.594538  142150 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1212 01:11:51.594601  142150 kubeadm.go:310] [preflight] Running pre-flight checks
	I1212 01:11:51.594684  142150 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 01:11:51.594835  142150 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 01:11:51.594954  142150 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1212 01:11:51.595052  142150 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 01:11:51.597008  142150 out.go:235]   - Generating certificates and keys ...
	I1212 01:11:51.597118  142150 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1212 01:11:51.597173  142150 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1212 01:11:51.597241  142150 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 01:11:51.597297  142150 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1212 01:11:51.597359  142150 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 01:11:51.597427  142150 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1212 01:11:51.597508  142150 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1212 01:11:51.597585  142150 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1212 01:11:51.597681  142150 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 01:11:51.597766  142150 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 01:11:51.597804  142150 kubeadm.go:310] [certs] Using the existing "sa" key
	I1212 01:11:51.597869  142150 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 01:11:51.597941  142150 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 01:11:51.598021  142150 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 01:11:51.598119  142150 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 01:11:51.598207  142150 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 01:11:51.598320  142150 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 01:11:51.598427  142150 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 01:11:51.598485  142150 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1212 01:11:51.598577  142150 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 01:11:51.599918  142150 out.go:235]   - Booting up control plane ...
	I1212 01:11:51.600024  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 01:11:51.600148  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 01:11:51.600229  142150 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 01:11:51.600341  142150 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 01:11:51.600507  142150 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1212 01:11:51.600572  142150 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1212 01:11:51.600672  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.600878  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.600992  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601222  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601285  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601456  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601515  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.601702  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.601804  142150 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1212 01:11:51.602020  142150 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1212 01:11:51.602033  142150 kubeadm.go:310] 
	I1212 01:11:51.602093  142150 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1212 01:11:51.602153  142150 kubeadm.go:310] 		timed out waiting for the condition
	I1212 01:11:51.602163  142150 kubeadm.go:310] 
	I1212 01:11:51.602211  142150 kubeadm.go:310] 	This error is likely caused by:
	I1212 01:11:51.602274  142150 kubeadm.go:310] 		- The kubelet is not running
	I1212 01:11:51.602393  142150 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 01:11:51.602416  142150 kubeadm.go:310] 
	I1212 01:11:51.602561  142150 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 01:11:51.602618  142150 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1212 01:11:51.602651  142150 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1212 01:11:51.602661  142150 kubeadm.go:310] 
	I1212 01:11:51.602794  142150 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1212 01:11:51.602919  142150 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1212 01:11:51.602928  142150 kubeadm.go:310] 
	I1212 01:11:51.603023  142150 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1212 01:11:51.603110  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1212 01:11:51.603176  142150 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1212 01:11:51.603237  142150 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1212 01:11:51.603252  142150 kubeadm.go:310] 
	I1212 01:11:51.603327  142150 kubeadm.go:394] duration metric: took 8m2.544704165s to StartCluster
	I1212 01:11:51.603376  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 01:11:51.603447  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 01:11:51.648444  142150 cri.go:89] found id: ""
	I1212 01:11:51.648488  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.648501  142150 logs.go:284] No container was found matching "kube-apiserver"
	I1212 01:11:51.648509  142150 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 01:11:51.648573  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 01:11:51.687312  142150 cri.go:89] found id: ""
	I1212 01:11:51.687341  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.687354  142150 logs.go:284] No container was found matching "etcd"
	I1212 01:11:51.687362  142150 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 01:11:51.687419  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 01:11:51.726451  142150 cri.go:89] found id: ""
	I1212 01:11:51.726505  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.726521  142150 logs.go:284] No container was found matching "coredns"
	I1212 01:11:51.726529  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 01:11:51.726594  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 01:11:51.763077  142150 cri.go:89] found id: ""
	I1212 01:11:51.763112  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.763125  142150 logs.go:284] No container was found matching "kube-scheduler"
	I1212 01:11:51.763132  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 01:11:51.763194  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 01:11:51.801102  142150 cri.go:89] found id: ""
	I1212 01:11:51.801139  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.801152  142150 logs.go:284] No container was found matching "kube-proxy"
	I1212 01:11:51.801160  142150 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 01:11:51.801220  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 01:11:51.838249  142150 cri.go:89] found id: ""
	I1212 01:11:51.838275  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.838283  142150 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 01:11:51.838290  142150 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 01:11:51.838357  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 01:11:51.874958  142150 cri.go:89] found id: ""
	I1212 01:11:51.874989  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.874997  142150 logs.go:284] No container was found matching "kindnet"
	I1212 01:11:51.875007  142150 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1212 01:11:51.875106  142150 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1212 01:11:51.911408  142150 cri.go:89] found id: ""
	I1212 01:11:51.911440  142150 logs.go:282] 0 containers: []
	W1212 01:11:51.911451  142150 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1212 01:11:51.911465  142150 logs.go:123] Gathering logs for describe nodes ...
	I1212 01:11:51.911483  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 01:11:51.997485  142150 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 01:11:51.997516  142150 logs.go:123] Gathering logs for CRI-O ...
	I1212 01:11:51.997532  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 01:11:52.119827  142150 logs.go:123] Gathering logs for container status ...
	I1212 01:11:52.119869  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 01:11:52.162270  142150 logs.go:123] Gathering logs for kubelet ...
	I1212 01:11:52.162298  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 01:11:52.215766  142150 logs.go:123] Gathering logs for dmesg ...
	I1212 01:11:52.215805  142150 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1212 01:11:52.231106  142150 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1212 01:11:52.231187  142150 out.go:270] * 
	W1212 01:11:52.231316  142150 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.231351  142150 out.go:270] * 
	W1212 01:11:52.232281  142150 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 01:11:52.235692  142150 out.go:201] 
	W1212 01:11:52.236852  142150 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 01:11:52.236890  142150 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 01:11:52.236910  142150 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 01:11:52.238333  142150 out.go:201] 
	
	
	==> CRI-O <==
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.523561550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966588523535626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f64c581-5c38-468e-8b50-104f70a46e1d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.524241577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fac99b74-9fe1-4aa0-8e38-21b815703015 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.524313946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fac99b74-9fe1-4aa0-8e38-21b815703015 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.524352342Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fac99b74-9fe1-4aa0-8e38-21b815703015 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.560613542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c81ff2f-af0f-4548-9c85-fd230189b254 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.560718562Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c81ff2f-af0f-4548-9c85-fd230189b254 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.562589825Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d18159c-6682-49bd-91e9-c95617282fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.563000880Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966588562971915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d18159c-6682-49bd-91e9-c95617282fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.563570000Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=924ddd94-53be-4ad6-a95d-65b13bd64011 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.563654196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=924ddd94-53be-4ad6-a95d-65b13bd64011 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.563708725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=924ddd94-53be-4ad6-a95d-65b13bd64011 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.598179323Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=236d42b8-ed3b-49ba-8104-1e6957a6449b name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.598283481Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=236d42b8-ed3b-49ba-8104-1e6957a6449b name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.599509128Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5cb5bcb-3965-4ceb-8056-93161988101f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.599966268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966588599939807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5cb5bcb-3965-4ceb-8056-93161988101f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.600540922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aff80568-8846-4d2e-bd8a-5e0970edf26a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.600627713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aff80568-8846-4d2e-bd8a-5e0970edf26a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.600671682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aff80568-8846-4d2e-bd8a-5e0970edf26a name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.634685363Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=872cba7b-bb48-4d82-a384-79987f5d9e29 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.634756567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=872cba7b-bb48-4d82-a384-79987f5d9e29 name=/runtime.v1.RuntimeService/Version
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.635867226Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e0f63673-7304-4e11-9ac6-a8f9b3ea097b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.636324296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733966588636302884,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e0f63673-7304-4e11-9ac6-a8f9b3ea097b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.636883611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d28a0141-fee7-40e8-b219-85af6671dc08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.636959294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d28a0141-fee7-40e8-b219-85af6671dc08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 12 01:23:08 old-k8s-version-738445 crio[636]: time="2024-12-12 01:23:08.636990573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d28a0141-fee7-40e8-b219-85af6671dc08 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 01:03] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055186] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042033] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.154525] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.857593] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.677106] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.928690] systemd-fstab-generator[562]: Ignoring "noauto" option for root device
	[  +0.061807] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069660] systemd-fstab-generator[574]: Ignoring "noauto" option for root device
	[  +0.204368] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.145806] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.275893] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +7.875714] systemd-fstab-generator[884]: Ignoring "noauto" option for root device
	[  +0.056265] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.046586] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[Dec12 01:04] kauditd_printk_skb: 46 callbacks suppressed
	[Dec12 01:07] systemd-fstab-generator[5072]: Ignoring "noauto" option for root device
	[Dec12 01:09] systemd-fstab-generator[5350]: Ignoring "noauto" option for root device
	[  +0.066882] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:23:08 up 19 min,  0 users,  load average: 0.06, 0.05, 0.05
	Linux old-k8s-version-738445 5.10.207 #1 SMP Wed Dec 11 21:54:26 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000bc6090)
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]: goroutine 169 [select]:
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000be3ef0, 0x4f0ac20, 0xc00094b9a0, 0x1, 0xc0001000c0)
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00081ec40, 0xc0001000c0)
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bc41f0, 0xc000ba4e00)
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 12 01:23:04 old-k8s-version-738445 kubelet[6827]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 12 01:23:04 old-k8s-version-738445 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 12 01:23:04 old-k8s-version-738445 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 01:23:05 old-k8s-version-738445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 137.
	Dec 12 01:23:05 old-k8s-version-738445 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 12 01:23:05 old-k8s-version-738445 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 12 01:23:05 old-k8s-version-738445 kubelet[6837]: I1212 01:23:05.577415    6837 server.go:416] Version: v1.20.0
	Dec 12 01:23:05 old-k8s-version-738445 kubelet[6837]: I1212 01:23:05.577766    6837 server.go:837] Client rotation is on, will bootstrap in background
	Dec 12 01:23:05 old-k8s-version-738445 kubelet[6837]: I1212 01:23:05.579710    6837 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 12 01:23:05 old-k8s-version-738445 kubelet[6837]: W1212 01:23:05.580859    6837 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 12 01:23:05 old-k8s-version-738445 kubelet[6837]: I1212 01:23:05.581254    6837 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 2 (236.203219ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-738445" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (131.06s)
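Note: the captured minikube log above ends with the suggestion to pass --extra-config=kubelet.cgroup-driver=systemd and to check 'journalctl -xeu kubelet'. A minimal retry sketch for this profile, assuming the same kvm2/crio configuration used by this job (the exact harness flags are not reproduced here, so treat this as an illustration rather than the harness invocation), would be:

	out/minikube-linux-amd64 start -p old-k8s-version-738445 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still exits, inspecting 'journalctl -xeu kubelet' from inside the node (for example via 'out/minikube-linux-amd64 ssh -p old-k8s-version-738445') is the next step the log itself recommends.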

                                                
                                    

Test pass (250/320)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 36.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.2/json-events 17.51
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.14
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.6
22 TestOffline 88.28
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 206.32
31 TestAddons/serial/GCPAuth/Namespaces 1.91
32 TestAddons/serial/GCPAuth/FakeCredentials 13.54
35 TestAddons/parallel/Registry 20.23
37 TestAddons/parallel/InspektorGadget 10.73
40 TestAddons/parallel/CSI 60.99
41 TestAddons/parallel/Headlamp 20.86
42 TestAddons/parallel/CloudSpanner 6.78
43 TestAddons/parallel/LocalPath 17.51
44 TestAddons/parallel/NvidiaDevicePlugin 6.81
45 TestAddons/parallel/Yakd 10.76
48 TestCertOptions 62.37
49 TestCertExpiration 262.58
51 TestForceSystemdFlag 76.31
52 TestForceSystemdEnv 56.22
54 TestKVMDriverInstallOrUpdate 8.05
58 TestErrorSpam/setup 43.79
59 TestErrorSpam/start 0.37
60 TestErrorSpam/status 0.76
61 TestErrorSpam/pause 1.61
62 TestErrorSpam/unpause 1.79
63 TestErrorSpam/stop 95.05
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 57.59
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 397.63
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.78
75 TestFunctional/serial/CacheCmd/cache/add_local 2.82
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 54.01
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.55
86 TestFunctional/serial/LogsFileCmd 1.51
87 TestFunctional/serial/InvalidService 5.33
89 TestFunctional/parallel/ConfigCmd 0.33
90 TestFunctional/parallel/DashboardCmd 16.2
91 TestFunctional/parallel/DryRun 0.28
92 TestFunctional/parallel/InternationalLanguage 0.14
93 TestFunctional/parallel/StatusCmd 0.78
97 TestFunctional/parallel/ServiceCmdConnect 50.46
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 49.76
101 TestFunctional/parallel/SSHCmd 0.4
102 TestFunctional/parallel/CpCmd 1.28
103 TestFunctional/parallel/MySQL 23.98
104 TestFunctional/parallel/FileSync 0.21
105 TestFunctional/parallel/CertSync 1.3
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
113 TestFunctional/parallel/License 0.83
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.47
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
129 TestFunctional/parallel/ImageCommands/ImageBuild 6.31
130 TestFunctional/parallel/ImageCommands/Setup 2.61
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
135 TestFunctional/parallel/ProfileCmd/profile_list 0.35
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.55
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.49
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.81
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.81
141 TestFunctional/parallel/ImageCommands/ImageRemove 1.14
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.04
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
144 TestFunctional/parallel/ServiceCmd/DeployApp 7.2
145 TestFunctional/parallel/MountCmd/any-port 11
146 TestFunctional/parallel/ServiceCmd/List 0.48
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
149 TestFunctional/parallel/ServiceCmd/Format 0.33
150 TestFunctional/parallel/ServiceCmd/URL 0.3
151 TestFunctional/parallel/MountCmd/specific-port 1.69
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.38
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 204.66
160 TestMultiControlPlane/serial/DeployApp 9.31
161 TestMultiControlPlane/serial/PingHostFromPods 1.23
162 TestMultiControlPlane/serial/AddWorkerNode 58.44
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
165 TestMultiControlPlane/serial/CopyFile 13.04
171 TestMultiControlPlane/serial/DeleteSecondaryNode 16.7
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
174 TestMultiControlPlane/serial/RestartCluster 272.99
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
176 TestMultiControlPlane/serial/AddSecondaryNode 78.57
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
181 TestJSONOutput/start/Command 86.79
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.71
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.62
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 7.36
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.2
209 TestMainNoArgs 0.05
210 TestMinikubeProfile 93.69
213 TestMountStart/serial/StartWithMountFirst 28.06
214 TestMountStart/serial/VerifyMountFirst 0.37
215 TestMountStart/serial/StartWithMountSecond 28.96
216 TestMountStart/serial/VerifyMountSecond 0.38
217 TestMountStart/serial/DeleteFirst 0.88
218 TestMountStart/serial/VerifyMountPostDelete 0.39
219 TestMountStart/serial/Stop 1.28
220 TestMountStart/serial/RestartStopped 24.86
221 TestMountStart/serial/VerifyMountPostStop 0.37
224 TestMultiNode/serial/FreshStart2Nodes 120.56
225 TestMultiNode/serial/DeployApp2Nodes 10.01
226 TestMultiNode/serial/PingHostFrom2Pods 0.79
227 TestMultiNode/serial/AddNode 54.68
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.58
230 TestMultiNode/serial/CopyFile 7.32
231 TestMultiNode/serial/StopNode 2.41
232 TestMultiNode/serial/StartAfterStop 41.59
234 TestMultiNode/serial/DeleteNode 2.24
236 TestMultiNode/serial/RestartMultiNode 205.54
237 TestMultiNode/serial/ValidateNameConflict 43.96
244 TestScheduledStopUnix 114.13
248 TestRunningBinaryUpgrade 188.78
253 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/StartWithK8s 94.57
263 TestPause/serial/Start 89.36
264 TestNoKubernetes/serial/StartWithStopK8s 38.92
265 TestNoKubernetes/serial/Start 29.32
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
267 TestNoKubernetes/serial/ProfileList 1.72
268 TestNoKubernetes/serial/Stop 1.35
269 TestNoKubernetes/serial/StartNoArgs 25.69
270 TestPause/serial/SecondStartNoReconfiguration 45.1
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
279 TestNetworkPlugins/group/false 3.37
283 TestStoppedBinaryUpgrade/Setup 3.19
284 TestStoppedBinaryUpgrade/Upgrade 128.64
285 TestPause/serial/Pause 0.7
286 TestPause/serial/VerifyStatus 0.24
287 TestPause/serial/Unpause 0.65
288 TestPause/serial/PauseAgain 0.84
289 TestPause/serial/DeletePaused 1.82
290 TestPause/serial/VerifyDeletedResources 0.64
291 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
295 TestStartStop/group/no-preload/serial/FirstStart 89.49
297 TestStartStop/group/embed-certs/serial/FirstStart 65.39
298 TestStartStop/group/no-preload/serial/DeployApp 14.29
299 TestStartStop/group/embed-certs/serial/DeployApp 13.29
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
302 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.67
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.25
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
313 TestStartStop/group/no-preload/serial/SecondStart 677.66
314 TestStartStop/group/embed-certs/serial/SecondStart 611.55
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 557.59
317 TestStartStop/group/old-k8s-version/serial/Stop 2.31
318 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/newest-cni/serial/FirstStart 49.26
330 TestNetworkPlugins/group/auto/Start 84.7
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
333 TestStartStop/group/newest-cni/serial/Stop 92.64
334 TestNetworkPlugins/group/kindnet/Start 72.11
335 TestNetworkPlugins/group/auto/KubeletFlags 0.21
336 TestNetworkPlugins/group/auto/NetCatPod 11.25
337 TestNetworkPlugins/group/auto/DNS 0.16
338 TestNetworkPlugins/group/auto/Localhost 0.13
339 TestNetworkPlugins/group/auto/HairPin 0.14
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.28
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
343 TestStartStop/group/newest-cni/serial/SecondStart 37.69
344 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
345 TestNetworkPlugins/group/calico/Start 104.71
346 TestNetworkPlugins/group/custom-flannel/Start 113.79
347 TestNetworkPlugins/group/kindnet/DNS 0.18
348 TestNetworkPlugins/group/kindnet/Localhost 0.15
349 TestNetworkPlugins/group/kindnet/HairPin 0.13
350 TestNetworkPlugins/group/enable-default-cni/Start 106.47
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
354 TestStartStop/group/newest-cni/serial/Pause 2.33
355 TestNetworkPlugins/group/flannel/Start 140.85
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.23
358 TestNetworkPlugins/group/calico/NetCatPod 12.5
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
361 TestNetworkPlugins/group/calico/DNS 0.18
362 TestNetworkPlugins/group/calico/Localhost 0.16
363 TestNetworkPlugins/group/calico/HairPin 0.14
364 TestNetworkPlugins/group/custom-flannel/DNS 0.18
365 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
366 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
367 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
368 TestNetworkPlugins/group/enable-default-cni/NetCatPod 14.27
369 TestNetworkPlugins/group/bridge/Start 91.13
370 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
371 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
372 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
373 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
375 TestNetworkPlugins/group/flannel/NetCatPod 11.21
376 TestNetworkPlugins/group/flannel/DNS 0.16
377 TestNetworkPlugins/group/flannel/Localhost 0.12
378 TestNetworkPlugins/group/flannel/HairPin 0.15
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
380 TestNetworkPlugins/group/bridge/NetCatPod 10.21
381 TestNetworkPlugins/group/bridge/DNS 0.16
382 TestNetworkPlugins/group/bridge/Localhost 0.13
383 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (36.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-531520 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-531520 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (36.819369628s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (36.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1211 23:33:58.767437   93600 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1211 23:33:58.767566   93600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-531520
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-531520: exit status 85 (67.142908ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-531520 | jenkins | v1.34.0 | 11 Dec 24 23:33 UTC |          |
	|         | -p download-only-531520        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:33:21
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:33:21.990173   93613 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:33:21.990446   93613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:33:21.990456   93613 out.go:358] Setting ErrFile to fd 2...
	I1211 23:33:21.990464   93613 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:33:21.990648   93613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	W1211 23:33:21.990797   93613 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20083-86355/.minikube/config/config.json: open /home/jenkins/minikube-integration/20083-86355/.minikube/config/config.json: no such file or directory
	I1211 23:33:21.991425   93613 out.go:352] Setting JSON to true
	I1211 23:33:21.992398   93613 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8144,"bootTime":1733951858,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:33:21.992475   93613 start.go:139] virtualization: kvm guest
	I1211 23:33:21.994894   93613 out.go:97] [download-only-531520] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1211 23:33:21.995084   93613 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball: no such file or directory
	I1211 23:33:21.995096   93613 notify.go:220] Checking for updates...
	I1211 23:33:21.996521   93613 out.go:169] MINIKUBE_LOCATION=20083
	I1211 23:33:21.998088   93613 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:33:21.999617   93613 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:33:22.000970   93613 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:33:22.002288   93613 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1211 23:33:22.005111   93613 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:33:22.005366   93613 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:33:22.039906   93613 out.go:97] Using the kvm2 driver based on user configuration
	I1211 23:33:22.039938   93613 start.go:297] selected driver: kvm2
	I1211 23:33:22.039947   93613 start.go:901] validating driver "kvm2" against <nil>
	I1211 23:33:22.040343   93613 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:33:22.040438   93613 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1211 23:33:22.055878   93613 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1211 23:33:22.055927   93613 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:33:22.056462   93613 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1211 23:33:22.056633   93613 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:33:22.056669   93613 cni.go:84] Creating CNI manager for ""
	I1211 23:33:22.056735   93613 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:33:22.056749   93613 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:33:22.056808   93613 start.go:340] cluster config:
	{Name:download-only-531520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-531520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:33:22.056994   93613 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:33:22.058716   93613 out.go:97] Downloading VM boot image ...
	I1211 23:33:22.058771   93613 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/iso/amd64/minikube-v1.34.0-1733936888-20083-amd64.iso
	I1211 23:33:36.637757   93613 out.go:97] Starting "download-only-531520" primary control-plane node in "download-only-531520" cluster
	I1211 23:33:36.637780   93613 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1211 23:33:36.794732   93613 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1211 23:33:36.794797   93613 cache.go:56] Caching tarball of preloaded images
	I1211 23:33:36.794983   93613 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1211 23:33:36.797101   93613 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1211 23:33:36.797119   93613 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1211 23:33:36.953440   93613 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1211 23:33:56.861806   93613 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1211 23:33:56.861929   93613 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-531520 host does not exist
	  To start a cluster, run: "minikube start -p download-only-531520"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
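Editor's note: the preload log above downloads the v1.20.0 tarball with an md5 digest attached to the URL (checksum=md5:f93b07cde9c3289306cbaeb7a1803c19) and then runs "getting checksum" / "verifying checksum" steps. Below is a minimal Go sketch of that verification step, assuming only the cached tarball path and the digest taken from the log; verifyMD5 is an illustrative helper, not minikube's actual code.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the downloaded preload tarball and compares the digest
// to the md5 value carried in the download URL's checksum parameter.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Path mirrors the cache location in the log; digest is the md5 from the URL above.
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")
	if err := verifyMD5(tarball, "f93b07cde9c3289306cbaeb7a1803c19"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("preload tarball checksum OK")
}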

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-531520
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (17.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-596435 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-596435 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (17.508657448s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (17.51s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1211 23:34:16.622789   93600 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1211 23:34:16.622834   93600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-596435
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-596435: exit status 85 (66.071452ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-531520 | jenkins | v1.34.0 | 11 Dec 24 23:33 UTC |                     |
	|         | -p download-only-531520        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 11 Dec 24 23:33 UTC | 11 Dec 24 23:33 UTC |
	| delete  | -p download-only-531520        | download-only-531520 | jenkins | v1.34.0 | 11 Dec 24 23:33 UTC | 11 Dec 24 23:33 UTC |
	| start   | -o=json --download-only        | download-only-596435 | jenkins | v1.34.0 | 11 Dec 24 23:33 UTC |                     |
	|         | -p download-only-596435        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/11 23:33:59
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.23.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1211 23:33:59.158651   93907 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:33:59.158934   93907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:33:59.158946   93907 out.go:358] Setting ErrFile to fd 2...
	I1211 23:33:59.158953   93907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:33:59.159174   93907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:33:59.159811   93907 out.go:352] Setting JSON to true
	I1211 23:33:59.160744   93907 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":8181,"bootTime":1733951858,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:33:59.160838   93907 start.go:139] virtualization: kvm guest
	I1211 23:33:59.162989   93907 out.go:97] [download-only-596435] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1211 23:33:59.163137   93907 notify.go:220] Checking for updates...
	I1211 23:33:59.164599   93907 out.go:169] MINIKUBE_LOCATION=20083
	I1211 23:33:59.166465   93907 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:33:59.167797   93907 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:33:59.169103   93907 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:33:59.170396   93907 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1211 23:33:59.172614   93907 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1211 23:33:59.172844   93907 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:33:59.206628   93907 out.go:97] Using the kvm2 driver based on user configuration
	I1211 23:33:59.206668   93907 start.go:297] selected driver: kvm2
	I1211 23:33:59.206677   93907 start.go:901] validating driver "kvm2" against <nil>
	I1211 23:33:59.207045   93907 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:33:59.207147   93907 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20083-86355/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1211 23:33:59.223113   93907 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1211 23:33:59.223198   93907 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1211 23:33:59.223946   93907 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1211 23:33:59.224162   93907 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1211 23:33:59.224201   93907 cni.go:84] Creating CNI manager for ""
	I1211 23:33:59.224271   93907 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1211 23:33:59.224284   93907 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1211 23:33:59.224358   93907 start.go:340] cluster config:
	{Name:download-only-596435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-596435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:33:59.224489   93907 iso.go:125] acquiring lock: {Name:mkdc1af1a0d71db46a3b244a0831eec736a676ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1211 23:33:59.226316   93907 out.go:97] Starting "download-only-596435" primary control-plane node in "download-only-596435" cluster
	I1211 23:33:59.226333   93907 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:33:59.987632   93907 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1211 23:33:59.987705   93907 cache.go:56] Caching tarball of preloaded images
	I1211 23:33:59.987929   93907 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1211 23:33:59.989966   93907 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1211 23:33:59.989984   93907 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1211 23:34:00.144332   93907 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20083-86355/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-596435 host does not exist
	  To start a cluster, run: "minikube start -p download-only-596435"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-596435
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
I1211 23:34:17.237510   93600 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-922560 --alsologtostderr --binary-mirror http://127.0.0.1:39457 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-922560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-922560
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
TestOffline (88.28s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-382616 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-382616 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.203851558s)
helpers_test.go:175: Cleaning up "offline-crio-382616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-382616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-382616: (1.079565167s)
--- PASS: TestOffline (88.28s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-021354
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-021354: exit status 85 (57.529308ms)

                                                
                                                
-- stdout --
	* Profile "addons-021354" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-021354"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-021354
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-021354: exit status 85 (57.646069ms)

                                                
                                                
-- stdout --
	* Profile "addons-021354" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-021354"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (206.32s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-021354 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-021354 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m26.315359584s)
--- PASS: TestAddons/Setup (206.32s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (1.91s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-021354 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-021354 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-021354 get secret gcp-auth -n new-namespace: exit status 1 (77.584761ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-021354 logs -l app=gcp-auth -n gcp-auth
I1211 23:37:44.723340   93600 retry.go:31] will retry after 1.648876987s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/12/11 23:37:43 GCP Auth Webhook started!
	2024/12/11 23:37:44 Ready to marshal response ...
	2024/12/11 23:37:44 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-021354 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (1.91s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (13.54s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-021354 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-021354 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5f37f102-2cd3-45d7-a36e-58954eec3bcb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5f37f102-2cd3-45d7-a36e-58954eec3bcb] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 13.004625596s
addons_test.go:633: (dbg) Run:  kubectl --context addons-021354 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-021354 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-021354 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (13.54s)

                                                
                                    
TestAddons/parallel/Registry (20.23s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.263126ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-9rj9b" [0eebcfc6-7414-4613-bf0e-42a424a43722] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004926631s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x2lv7" [8128c544-09f7-4769-85c1-30a0a916ca57] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004502818s
addons_test.go:331: (dbg) Run:  kubectl --context addons-021354 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-021354 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-021354 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.434346318s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 ip
2024/12/11 23:38:27 [DEBUG] GET http://192.168.39.225:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.23s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.73s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dcz9n" [ae6014cc-c32d-4f72-84be-a2d857bbc6e7] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005248205s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-021354 addons disable inspektor-gadget --alsologtostderr -v=1: (5.725935201s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

                                                
                                    
TestAddons/parallel/CSI (60.99s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1211 23:38:35.318338   93600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1211 23:38:35.323505   93600 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1211 23:38:35.323529   93600 kapi.go:107] duration metric: took 5.22063ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.228674ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-021354 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-021354 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c81774f3-4033-470f-bf6b-cc8886301d74] Pending
helpers_test.go:344: "task-pv-pod" [c81774f3-4033-470f-bf6b-cc8886301d74] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c81774f3-4033-470f-bf6b-cc8886301d74] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004630025s
addons_test.go:511: (dbg) Run:  kubectl --context addons-021354 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-021354 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-021354 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-021354 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-021354 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-021354 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-021354 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9d6b0ccf-5952-4814-a9ab-a8743c2e3c01] Pending
helpers_test.go:344: "task-pv-pod-restore" [9d6b0ccf-5952-4814-a9ab-a8743c2e3c01] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9d6b0ccf-5952-4814-a9ab-a8743c2e3c01] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004704578s
addons_test.go:553: (dbg) Run:  kubectl --context addons-021354 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-021354 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-021354 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-021354 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.873494456s)
--- PASS: TestAddons/parallel/CSI (60.99s)
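Editor's note: the long runs of `kubectl get pvc ... -o jsonpath={.status.phase}` above are the test helper polling until each claim reports the Bound phase. The following is a minimal client-go sketch of the same wait loop, assuming a kubeconfig at the default location and a recent apimachinery; waitForPVCBound is an illustrative helper, not the helper this suite uses.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls the named PVC until its phase is Bound or the timeout expires.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
}

func main() {
	// Kubeconfig path is an assumption; the test itself targets context addons-021354.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPVCBound(context.Background(), cs, "default", "hpvc", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pvc hpvc is Bound")
}

The 6-minute timeout mirrors the "waiting 6m0s for pvc" lines in the log.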

                                                
                                    
TestAddons/parallel/Headlamp (20.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-021354 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-l74wv" [83eea39e-3251-4372-8d48-6cd1c0f06ed3] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-l74wv" [83eea39e-3251-4372-8d48-6cd1c0f06ed3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-l74wv" [83eea39e-3251-4372-8d48-6cd1c0f06ed3] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004687637s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-021354 addons disable headlamp --alsologtostderr -v=1: (5.969904782s)
--- PASS: TestAddons/parallel/Headlamp (20.86s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-w2p74" [38d5b1b4-aa79-4100-a0ca-c7e681d2f5fa] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005159216s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.78s)

                                                
                                    
TestAddons/parallel/LocalPath (17.51s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-021354 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-021354 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4800cb36-d4ea-4278-a73f-74ef5730ac20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4800cb36-d4ea-4278-a73f-74ef5730ac20] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4800cb36-d4ea-4278-a73f-74ef5730ac20] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004577153s
addons_test.go:906: (dbg) Run:  kubectl --context addons-021354 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 ssh "cat /opt/local-path-provisioner/pvc-6ce29942-9383-4c5e-b256-1d3d7149a74d_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-021354 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-021354 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (17.51s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.81s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9qfkl" [fb3a5825-e9dc-42d8-ba09-f0d94c314d72] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004434858s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.81s)

                                                
                                    
TestAddons/parallel/Yakd (10.76s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-59hnw" [13e36efa-e2e7-40ba-89e3-149645c1a02d] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003426194s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-021354 addons disable yakd --alsologtostderr -v=1: (5.752913419s)
--- PASS: TestAddons/parallel/Yakd (10.76s)

                                                
                                    
TestCertOptions (62.37s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-000053 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1212 00:52:46.618376   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:52:55.697627   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-000053 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m1.053329741s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-000053 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-000053 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-000053 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-000053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-000053
--- PASS: TestCertOptions (62.37s)

                                                
                                    
TestCertExpiration (262.58s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-112531 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-112531 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m3.312981852s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-112531 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-112531 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (18.490642246s)
helpers_test.go:175: Cleaning up "cert-expiration-112531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-112531
--- PASS: TestCertExpiration (262.58s)
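Editor's note: TestCertExpiration first starts the profile with --cert-expiration=3m and then restarts it with 8760h, so it depends on minikube regenerating the control-plane certificates with the new lifetime. Below is a small Go sketch for inspecting the resulting expiry, reusing the apiserver.crt path shown in the TestCertOptions log above; this is an illustrative check, not part of the test itself.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed path inside the minikube guest (the same file TestCertOptions inspects with openssl).
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	fmt.Printf("apiserver.crt expires %s (in %s)\n", cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
}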

                                                
                                    
TestForceSystemdFlag (76.31s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-641782 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-641782 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.914195093s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-641782 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-641782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-641782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-641782: (1.115231299s)
--- PASS: TestForceSystemdFlag (76.31s)

                                                
                                    
TestForceSystemdEnv (56.22s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-923531 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-923531 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (55.193652688s)
helpers_test.go:175: Cleaning up "force-systemd-env-923531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-923531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-923531: (1.02980011s)
--- PASS: TestForceSystemdEnv (56.22s)

                                                
                                    
TestKVMDriverInstallOrUpdate (8.05s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1212 00:50:44.467360   93600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:50:44.467536   93600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1212 00:50:44.506122   93600 install.go:62] docker-machine-driver-kvm2: exit status 1
W1212 00:50:44.506551   93600 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1212 00:50:44.506604   93600 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate286435556/001/docker-machine-driver-kvm2
I1212 00:50:45.099743   93600 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate286435556/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5318040 0x5318040 0x5318040 0x5318040 0x5318040 0x5318040 0x5318040] Decompressors:map[bz2:0xc0000fcd10 gz:0xc0000fcd18 tar:0xc0000fccb0 tar.bz2:0xc0000fccc0 tar.gz:0xc0000fccd0 tar.xz:0xc0000fcce0 tar.zst:0xc0000fccf0 tbz2:0xc0000fccc0 tgz:0xc0000fccd0 txz:0xc0000fcce0 tzst:0xc0000fccf0 xz:0xc0000fcd30 zip:0xc0000fce10 zst:0xc0000fcd38] Getters:map[file:0xc001b17840 http:0xc000729540 https:0xc000729590] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1212 00:50:45.099811   93600 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate286435556/001/docker-machine-driver-kvm2
I1212 00:50:49.226142   93600 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1212 00:50:49.226248   93600 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1212 00:50:49.257322   93600 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1212 00:50:49.257365   93600 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1212 00:50:49.257455   93600 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1212 00:50:49.257491   93600 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate286435556/002/docker-machine-driver-kvm2
I1212 00:50:49.591691   93600 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate286435556/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5318040 0x5318040 0x5318040 0x5318040 0x5318040 0x5318040 0x5318040] Decompressors:map[bz2:0xc0000fcd10 gz:0xc0000fcd18 tar:0xc0000fccb0 tar.bz2:0xc0000fccc0 tar.gz:0xc0000fccd0 tar.xz:0xc0000fcce0 tar.zst:0xc0000fccf0 tbz2:0xc0000fccc0 tgz:0xc0000fccd0 txz:0xc0000fcce0 tzst:0xc0000fccf0 xz:0xc0000fcd30 zip:0xc0000fce10 zst:0xc0000fcd38] Getters:map[file:0xc0016575b0 http:0xc0007c82d0 https:0xc0007c8320] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1212 00:50:49.591755   93600 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate286435556/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (8.05s)
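Editor's note: the driver.go:46 lines above show the download falling back when the arch-suffixed release asset's checksum file returns 404: the arch-specific URL is tried first, then the common, unsuffixed asset name. Below is a minimal sketch of that fallback using hashicorp/go-getter, which the checksum-in-URL syntax in the log suggests; fetchDriver is an illustrative helper, not minikube's implementation.

package main

import (
	"fmt"

	getter "github.com/hashicorp/go-getter"
)

// fetchDriver tries the arch-specific asset first and falls back to the
// common asset name if that download (or its checksum file) fails.
func fetchDriver(dst, archSrc, commonSrc string) error {
	if err := getter.GetFile(dst, archSrc); err != nil {
		fmt.Printf("arch-specific download failed (%v); trying the common version\n", err)
		return getter.GetFile(dst, commonSrc)
	}
	return nil
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	dst := "/tmp/docker-machine-driver-kvm2"
	err := fetchDriver(dst,
		base+"-amd64?checksum=file:"+base+"-amd64.sha256",
		base+"?checksum=file:"+base+".sha256")
	if err != nil {
		fmt.Println("download failed:", err)
	}
}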

                                                
                                    
TestErrorSpam/setup (43.79s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-198856 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-198856 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-198856 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-198856 --driver=kvm2  --container-runtime=crio: (43.787725279s)
--- PASS: TestErrorSpam/setup (43.79s)

                                                
                                    
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
TestErrorSpam/status (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 status
--- PASS: TestErrorSpam/status (0.76s)

                                                
                                    
TestErrorSpam/pause (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 pause
--- PASS: TestErrorSpam/pause (1.61s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (95.05s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 stop
E1211 23:47:46.621254   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:46.627654   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:46.639063   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:46.660461   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:46.701904   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:46.783354   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:46.944791   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:47.266509   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:47.908609   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:49.190277   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:51.753223   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:47:56.875212   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:48:07.117252   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:48:27.599644   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 stop: (1m32.499035886s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 stop: (1.346771603s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-198856 --log_dir /tmp/nospam-198856 stop: (1.20540541s)
--- PASS: TestErrorSpam/stop (95.05s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20083-86355/.minikube/files/etc/test/nested/copy/93600/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.59s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-075541 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1211 23:49:08.561676   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-075541 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (57.592212364s)
--- PASS: TestFunctional/serial/StartWithProxy (57.59s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (397.63s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1211 23:50:05.680236   93600 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-075541 --alsologtostderr -v=8
E1211 23:50:30.483810   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:52:46.617988   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1211 23:53:14.326824   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-075541 --alsologtostderr -v=8: (6m37.630332032s)
functional_test.go:663: soft start took 6m37.631101245s for "functional-075541" cluster.
I1211 23:56:43.311021   93600 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (397.63s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-075541 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 cache add registry.k8s.io/pause:3.1: (1.191643061s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 cache add registry.k8s.io/pause:3.3: (1.364484014s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 cache add registry.k8s.io/pause:latest: (1.219623384s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-075541 /tmp/TestFunctionalserialCacheCmdcacheadd_local3272914651/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cache add minikube-local-cache-test:functional-075541
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 cache add minikube-local-cache-test:functional-075541: (2.492541926s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cache delete minikube-local-cache-test:functional-075541
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-075541
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.82s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.028482ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 cache reload: (1.003195151s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
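The round trip here is: delete the cached image inside the node, confirm crictl no longer sees it, run `cache reload`, and confirm it is back. A rough reproduction of those exact commands via os/exec, assuming a minikube binary on PATH (the test uses out/minikube-linux-amd64) and the functional-075541 profile from this run:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-075541"
	run("minikube", "-p", p, "ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")

	// The image should now be gone, so inspecti is expected to fail.
	if err := run("minikube", "-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected the image to be missing before reload")
	}

	run("minikube", "-p", p, "cache", "reload")

	// After reload the image should be present again.
	if err := run("minikube", "-p", p, "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after reload")
	}
}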

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 kubectl -- --context functional-075541 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-075541 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (54.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-075541 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-075541 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.008864287s)
functional_test.go:761: restart took 54.009014843s for "functional-075541" cluster.
I1211 23:57:46.393144   93600 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (54.01s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-075541 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
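The health check lists the control-plane pods as JSON and reads each pod's phase plus its Ready condition, which is what produces the "phase: Running / status: Ready" lines above. A hedged sketch of the same query, assuming kubectl on PATH and the functional-075541 context from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList models just the fields the check needs from `kubectl get po -o json`.
type podList struct {
	Items []struct {
		Metadata struct{ Name string } `json:"metadata"`
		Status   struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-075541",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}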

                                                
                                    
TestFunctional/serial/LogsCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 logs
E1211 23:57:46.617579   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 logs: (1.551929551s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 logs --file /tmp/TestFunctionalserialLogsFileCmd3428977059/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 logs --file /tmp/TestFunctionalserialLogsFileCmd3428977059/001/logs.txt: (1.513271362s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (5.33s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-075541 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-075541
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-075541: exit status 115 (298.470071ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.42:30483 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-075541 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-075541 delete -f testdata/invalidsvc.yaml: (1.834520631s)
--- PASS: TestFunctional/serial/InvalidService (5.33s)
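invalid-svc exercises the failure path: the Service exists but no pod ever runs behind it, so `minikube service` bails out with the SVC_UNREACHABLE reason and exit status 115 shown above. A small sketch of that assertion, assuming minikube on PATH (the test uses out/minikube-linux-amd64):

package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	var stderr bytes.Buffer
	cmd := exec.Command("minikube", "-p", "functional-075541", "service", "invalid-svc")
	cmd.Stderr = &stderr
	err := cmd.Run()

	// Expect a non-zero exit (115) and the SVC_UNREACHABLE reason on stderr.
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 &&
		strings.Contains(stderr.String(), "SVC_UNREACHABLE") {
		fmt.Println("got the expected SVC_UNREACHABLE failure (exit status 115)")
	} else {
		fmt.Println("unexpected result:", err)
	}
}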

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 config get cpus: exit status 14 (63.48507ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 config get cpus: exit status 14 (45.778533ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
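The config round trip is: unset, get (exit status 14 when the key is absent), set, get, unset again. A sketch of the same cycle, assuming minikube on PATH and the profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// mk runs a minikube subcommand against the profile and returns output + exit code.
func mk(args ...string) (string, int) {
	out, err := exec.Command("minikube", append([]string{"-p", "functional-075541"}, args...)...).CombinedOutput()
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	}
	return string(out), code
}

func main() {
	mk("config", "unset", "cpus")
	if _, code := mk("config", "get", "cpus"); code != 14 {
		fmt.Println("expected exit status 14 for an unset key, got", code)
	}
	mk("config", "set", "cpus", "2")
	out, _ := mk("config", "get", "cpus")
	fmt.Println("cpus =", out)
	mk("config", "unset", "cpus")
}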

                                                
                                    
TestFunctional/parallel/DashboardCmd (16.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-075541 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-075541 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 104979: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.20s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-075541 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-075541 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.774887ms)

                                                
                                                
-- stdout --
	* [functional-075541] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:58:24.822872  104884 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:58:24.823120  104884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:24.823130  104884 out.go:358] Setting ErrFile to fd 2...
	I1211 23:58:24.823137  104884 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:24.823341  104884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:58:24.823926  104884 out.go:352] Setting JSON to false
	I1211 23:58:24.824871  104884 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9647,"bootTime":1733951858,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:58:24.824971  104884 start.go:139] virtualization: kvm guest
	I1211 23:58:24.826949  104884 out.go:177] * [functional-075541] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1211 23:58:24.828377  104884 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:58:24.828382  104884 notify.go:220] Checking for updates...
	I1211 23:58:24.831365  104884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:58:24.832603  104884 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:58:24.833837  104884 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:24.835220  104884 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:58:24.836743  104884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:58:24.838445  104884 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:58:24.838848  104884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:58:24.838906  104884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:58:24.854414  104884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37453
	I1211 23:58:24.854886  104884 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:58:24.855456  104884 main.go:141] libmachine: Using API Version  1
	I1211 23:58:24.855486  104884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:58:24.855827  104884 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:58:24.856025  104884 main.go:141] libmachine: (functional-075541) Calling .DriverName
	I1211 23:58:24.856267  104884 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:58:24.856561  104884 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:58:24.856600  104884 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:58:24.871474  104884 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37185
	I1211 23:58:24.871956  104884 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:58:24.872491  104884 main.go:141] libmachine: Using API Version  1
	I1211 23:58:24.872519  104884 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:58:24.872830  104884 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:58:24.873003  104884 main.go:141] libmachine: (functional-075541) Calling .DriverName
	I1211 23:58:24.907222  104884 out.go:177] * Using the kvm2 driver based on existing profile
	I1211 23:58:24.908506  104884 start.go:297] selected driver: kvm2
	I1211 23:58:24.908523  104884 start.go:901] validating driver "kvm2" against &{Name:functional-075541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-075541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:58:24.908657  104884 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:58:24.910979  104884 out.go:201] 
	W1211 23:58:24.912239  104884 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1211 23:58:24.913373  104884 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-075541 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-075541 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-075541 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.983199ms)

                                                
                                                
-- stdout --
	* [functional-075541] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1211 23:58:19.540327  104420 out.go:345] Setting OutFile to fd 1 ...
	I1211 23:58:19.540431  104420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:19.540436  104420 out.go:358] Setting ErrFile to fd 2...
	I1211 23:58:19.540441  104420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1211 23:58:19.540717  104420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1211 23:58:19.541269  104420 out.go:352] Setting JSON to false
	I1211 23:58:19.542120  104420 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":9642,"bootTime":1733951858,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1211 23:58:19.542217  104420 start.go:139] virtualization: kvm guest
	I1211 23:58:19.544559  104420 out.go:177] * [functional-075541] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1211 23:58:19.546115  104420 out.go:177]   - MINIKUBE_LOCATION=20083
	I1211 23:58:19.546136  104420 notify.go:220] Checking for updates...
	I1211 23:58:19.548663  104420 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1211 23:58:19.550049  104420 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1211 23:58:19.551363  104420 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1211 23:58:19.552904  104420 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1211 23:58:19.554207  104420 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1211 23:58:19.555852  104420 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1211 23:58:19.556238  104420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:58:19.556308  104420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:58:19.570932  104420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I1211 23:58:19.571398  104420 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:58:19.572037  104420 main.go:141] libmachine: Using API Version  1
	I1211 23:58:19.572057  104420 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:58:19.572428  104420 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:58:19.572594  104420 main.go:141] libmachine: (functional-075541) Calling .DriverName
	I1211 23:58:19.572836  104420 driver.go:394] Setting default libvirt URI to qemu:///system
	I1211 23:58:19.573127  104420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1211 23:58:19.573164  104420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1211 23:58:19.587731  104420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44825
	I1211 23:58:19.588193  104420 main.go:141] libmachine: () Calling .GetVersion
	I1211 23:58:19.588760  104420 main.go:141] libmachine: Using API Version  1
	I1211 23:58:19.588782  104420 main.go:141] libmachine: () Calling .SetConfigRaw
	I1211 23:58:19.589093  104420 main.go:141] libmachine: () Calling .GetMachineName
	I1211 23:58:19.589253  104420 main.go:141] libmachine: (functional-075541) Calling .DriverName
	I1211 23:58:19.621740  104420 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1211 23:58:19.622981  104420 start.go:297] selected driver: kvm2
	I1211 23:58:19.623002  104420 start.go:901] validating driver "kvm2" against &{Name:functional-075541 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20083/minikube-v1.34.0-1733936888-20083-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-075541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1211 23:58:19.623133  104420 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1211 23:58:19.625220  104420 out.go:201] 
	W1211 23:58:19.626430  104420 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1211 23:58:19.627696  104420 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.78s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (50.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-075541 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-075541 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5s4r5" [bf286851-36cd-4e98-990f-944cf3864774] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-5s4r5" [bf286851-36cd-4e98-990f-944cf3864774] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 50.004350512s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.42:32523
functional_test.go:1675: http://192.168.39.42:32523: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-5s4r5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.42:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.42:32523
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (50.46s)
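The connectivity check resolves the NodePort URL with `minikube service --url` and then issues a plain HTTP GET, which is where the echoserver response above comes from. A sketch under the same assumptions (profile and deployment names from the log, minikube on PATH):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the service URL (e.g. http://192.168.39.42:32523 in this run).
	out, err := exec.Command("minikube", "-p", "functional-075541",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	// Hit the endpoint and dump whatever the echoserver returns.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}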

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (49.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5917c91e-11da-4998-a50d-d92bc7bfe42d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003807721s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-075541 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-075541 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-075541 get pvc myclaim -o=json
I1211 23:58:02.233798   93600 retry.go:31] will retry after 1.620996559s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:2a93bde9-2758-4ea3-afa1-1751eb68adc5 ResourceVersion:635 Generation:0 CreationTimestamp:2024-12-11 23:58:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc000983c50 VolumeMode:0xc000983c60 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-075541 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-075541 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8abb0933-7c11-41ee-abd4-728599dafc05] Pending
helpers_test.go:344: "sp-pod" [8abb0933-7c11-41ee-abd4-728599dafc05] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8abb0933-7c11-41ee-abd4-728599dafc05] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.004448177s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-075541 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-075541 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-075541 delete -f testdata/storage-provisioner/pod.yaml: (1.243369767s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-075541 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c85cf9e2-23f4-4449-89c6-c76b423a44ee] Pending
helpers_test.go:344: "sp-pod" [c85cf9e2-23f4-4449-89c6-c76b423a44ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c85cf9e2-23f4-4449-89c6-c76b423a44ee] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.00461733s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-075541 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.76s)
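The persistence check writes a file through the PVC-backed mount, deletes and re-applies the pod, and lists the mount again. A sketch of that sequence using the kubectl commands visible above; it assumes it is run from the test's working directory (for the relative testdata path) and skips the wait for the new sp-pod to become Ready that the real test performs:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the functional-075541 context.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-075541"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Write through the mounted claim, then recreate the pod.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")

	// The real test waits here for the new sp-pod to be Running before the exec.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Printf("ls /tmp/mount: %s (err=%v)\n", out, err)
}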

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh -n functional-075541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cp functional-075541:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd879636245/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh -n functional-075541 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh -n functional-075541 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

                                                
                                    
TestFunctional/parallel/MySQL (23.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-075541 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-twvc4" [faa80e69-8783-4963-adda-86f80f98a6ae] Pending
helpers_test.go:344: "mysql-6cdb49bbb-twvc4" [faa80e69-8783-4963-adda-86f80f98a6ae] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-twvc4" [faa80e69-8783-4963-adda-86f80f98a6ae] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.00888643s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-075541 exec mysql-6cdb49bbb-twvc4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-075541 exec mysql-6cdb49bbb-twvc4 -- mysql -ppassword -e "show databases;": exit status 1 (157.148593ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1211 23:58:15.863936   93600 retry.go:31] will retry after 1.499966537s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-075541 exec mysql-6cdb49bbb-twvc4 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-075541 exec mysql-6cdb49bbb-twvc4 -- mysql -ppassword -e "show databases;": exit status 1 (491.599279ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1211 23:58:17.856070   93600 retry.go:31] will retry after 1.438374358s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-075541 exec mysql-6cdb49bbb-twvc4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.98s)
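
Note: the two non-zero exits above are expected noise: the pod is Running before mysqld finishes initialising, so the first `show databases;` attempts fail with ERROR 1045 / ERROR 2002 and the harness retries with backoff until one succeeds. A minimal Go sketch of that retry loop, assuming kubectl on PATH; the pod name mysql-6cdb49bbb-twvc4 is specific to this run and waitForMySQL is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMySQL retries "show databases;" inside the pod until mysqld accepts the
// connection, mirroring the retry.go behaviour visible in the log above.
func waitForMySQL(context, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for {
		out, err := exec.Command("kubectl", "--context", context, "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("mysql not ready after %s: %v\n%s", timeout, err, out)
		}
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff; the harness uses jittered retry intervals
	}
}

func main() {
	if err := waitForMySQL("functional-075541", "mysql-6cdb49bbb-twvc4", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}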

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/93600/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo cat /etc/test/nested/copy/93600/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/93600.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo cat /etc/ssl/certs/93600.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/93600.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo cat /usr/share/ca-certificates/93600.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/936002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo cat /etc/ssl/certs/936002.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/936002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo cat /usr/share/ca-certificates/936002.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.30s)
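
Note: FileSync and CertSync both reduce to "does this exact path exist inside the node", checked by cat-ing it over `minikube ssh`. The numeric 93600 component in the paths matches this run's test process id, so the paths below are only valid for this run. A minimal Go sketch of the same verification, with checkSynced as a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
)

// checkSynced verifies that each path exists inside the node by cat-ing it
// over minikube ssh, the same check FileSync/CertSync perform.
func checkSynced(profile string, paths []string) {
	for _, p := range paths {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("sudo cat %s", p)).CombinedOutput()
		if err != nil {
			fmt.Printf("missing %s: %v\n%s", p, err, out)
			continue
		}
		fmt.Printf("ok: %s (%d bytes)\n", p, len(out))
	}
}

func main() {
	checkSynced("functional-075541", []string{
		"/etc/test/nested/copy/93600/hosts",
		"/etc/ssl/certs/93600.pem",
		"/usr/share/ca-certificates/93600.pem",
		"/etc/ssl/certs/51391683.0",
	})
}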

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-075541 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
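
Note: NodeLabels prints the label keys of the first node using a kubectl go-template. A minimal Go sketch of the same invocation, with the template argument copied verbatim from the command above (including the literal single quotes it carries there); kubectl on PATH is assumed:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print the label keys of the first node, exactly as the test does.
	tmpl := `--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'`
	out, err := exec.Command("kubectl", "--context", "functional-075541",
		"get", "nodes", "--output=go-template", tmpl).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println(string(out)) // space-separated label keys, e.g. kubernetes.io/hostname ...
}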

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 ssh "sudo systemctl is-active docker": exit status 1 (232.756833ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 ssh "sudo systemctl is-active containerd": exit status 1 (213.141034ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
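
Note: with cri-o as the container runtime, this test expects docker and containerd to be inactive; `systemctl is-active` exits non-zero for any state other than "active" (status 3 for an inactive unit), and `minikube ssh` propagates that exit code, which is why the "inactive" stdout above is paired with exit status 1. A minimal Go sketch that reads the state text instead of relying on the exit code; isActive is a hypothetical helper, and the crio entry is an assumption added only to show the contrast:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive reports whether a systemd unit inside the node is active.
// systemctl's non-zero exit is expected for inactive units, so the stdout
// text is treated as the real signal and the error is ignored.
func isActive(profile, unit string) (bool, string) {
	out, _ := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	return state == "active", state
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		active, state := isActive("functional-075541", unit)
		fmt.Printf("%s: active=%v (%s)\n", unit, active, state)
	}
}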

                                                
                                    
x
+
TestFunctional/parallel/License (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-075541 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-075541
localhost/kicbase/echo-server:functional-075541
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-075541 image ls --format short --alsologtostderr:
I1211 23:58:34.555930  105623 out.go:345] Setting OutFile to fd 1 ...
I1211 23:58:34.556049  105623 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:34.556058  105623 out.go:358] Setting ErrFile to fd 2...
I1211 23:58:34.556063  105623 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:34.556258  105623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
I1211 23:58:34.556858  105623 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:34.556962  105623 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:34.557336  105623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:34.557376  105623 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:34.573981  105623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
I1211 23:58:34.574484  105623 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:34.575110  105623 main.go:141] libmachine: Using API Version  1
I1211 23:58:34.575133  105623 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:34.575529  105623 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:34.575714  105623 main.go:141] libmachine: (functional-075541) Calling .GetState
I1211 23:58:34.577599  105623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:34.577641  105623 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:34.592808  105623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34911
I1211 23:58:34.593245  105623 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:34.593768  105623 main.go:141] libmachine: Using API Version  1
I1211 23:58:34.593801  105623 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:34.594094  105623 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:34.594248  105623 main.go:141] libmachine: (functional-075541) Calling .DriverName
I1211 23:58:34.594454  105623 ssh_runner.go:195] Run: systemctl --version
I1211 23:58:34.594480  105623 main.go:141] libmachine: (functional-075541) Calling .GetSSHHostname
I1211 23:58:34.597313  105623 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:34.597711  105623 main.go:141] libmachine: (functional-075541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:55:f7", ip: ""} in network mk-functional-075541: {Iface:virbr1 ExpiryTime:2024-12-12 00:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:55:f7 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:functional-075541 Clientid:01:52:54:00:c4:55:f7}
I1211 23:58:34.597742  105623 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined IP address 192.168.39.42 and MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:34.597877  105623 main.go:141] libmachine: (functional-075541) Calling .GetSSHPort
I1211 23:58:34.598056  105623 main.go:141] libmachine: (functional-075541) Calling .GetSSHKeyPath
I1211 23:58:34.598226  105623 main.go:141] libmachine: (functional-075541) Calling .GetSSHUsername
I1211 23:58:34.598378  105623 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/functional-075541/id_rsa Username:docker}
I1211 23:58:34.682743  105623 ssh_runner.go:195] Run: sudo crictl images --output json
I1211 23:58:34.727085  105623 main.go:141] libmachine: Making call to close driver server
I1211 23:58:34.727100  105623 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:34.727407  105623 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:34.727427  105623 main.go:141] libmachine: Making call to close connection to plugin binary
I1211 23:58:34.727490  105623 main.go:141] libmachine: (functional-075541) DBG | Closing plugin on server side
I1211 23:58:34.727516  105623 main.go:141] libmachine: Making call to close driver server
I1211 23:58:34.727534  105623 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:34.727803  105623 main.go:141] libmachine: (functional-075541) DBG | Closing plugin on server side
I1211 23:58:34.727843  105623 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:34.727859  105623 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-075541 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/kicbase/echo-server           | functional-075541  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-075541  | a95dfd9fc6fc0 | 3.33kB |
| localhost/my-image                      | functional-075541  | e855ad4299747 | 1.47MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-075541 image ls --format table --alsologtostderr:
I1211 23:58:41.318840  105780 out.go:345] Setting OutFile to fd 1 ...
I1211 23:58:41.319090  105780 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:41.319099  105780 out.go:358] Setting ErrFile to fd 2...
I1211 23:58:41.319105  105780 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:41.319313  105780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
I1211 23:58:41.319947  105780 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:41.320065  105780 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:41.320422  105780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:41.320467  105780 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:41.334346  105780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
I1211 23:58:41.334833  105780 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:41.335444  105780 main.go:141] libmachine: Using API Version  1
I1211 23:58:41.335470  105780 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:41.335878  105780 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:41.336064  105780 main.go:141] libmachine: (functional-075541) Calling .GetState
I1211 23:58:41.338217  105780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:41.338257  105780 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:41.353474  105780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40053
I1211 23:58:41.354004  105780 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:41.354532  105780 main.go:141] libmachine: Using API Version  1
I1211 23:58:41.354548  105780 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:41.354868  105780 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:41.355052  105780 main.go:141] libmachine: (functional-075541) Calling .DriverName
I1211 23:58:41.355279  105780 ssh_runner.go:195] Run: systemctl --version
I1211 23:58:41.355308  105780 main.go:141] libmachine: (functional-075541) Calling .GetSSHHostname
I1211 23:58:41.358207  105780 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:41.358595  105780 main.go:141] libmachine: (functional-075541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:55:f7", ip: ""} in network mk-functional-075541: {Iface:virbr1 ExpiryTime:2024-12-12 00:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:55:f7 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:functional-075541 Clientid:01:52:54:00:c4:55:f7}
I1211 23:58:41.358622  105780 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined IP address 192.168.39.42 and MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:41.358798  105780 main.go:141] libmachine: (functional-075541) Calling .GetSSHPort
I1211 23:58:41.358973  105780 main.go:141] libmachine: (functional-075541) Calling .GetSSHKeyPath
I1211 23:58:41.359160  105780 main.go:141] libmachine: (functional-075541) Calling .GetSSHUsername
I1211 23:58:41.359303  105780 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/functional-075541/id_rsa Username:docker}
I1211 23:58:41.462884  105780 ssh_runner.go:195] Run: sudo crictl images --output json
I1211 23:58:41.546875  105780 main.go:141] libmachine: Making call to close driver server
I1211 23:58:41.546896  105780 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:41.547207  105780 main.go:141] libmachine: (functional-075541) DBG | Closing plugin on server side
I1211 23:58:41.547263  105780 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:41.547273  105780 main.go:141] libmachine: Making call to close connection to plugin binary
I1211 23:58:41.547280  105780 main.go:141] libmachine: Making call to close driver server
I1211 23:58:41.547289  105780 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:41.547556  105780 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:41.547577  105780 main.go:141] libmachine: Making call to close connection to plugin binary
I1211 23:58:41.547611  105780 main.go:141] libmachine: (functional-075541) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-075541 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256
:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a
0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"4bdcb985456cb4484ed6b959d42455f7b9b1303a9c4b85d591185506cae73051","repoDigests":["docker.io/library/b3ec1d562f8bb067ebde0f2351fc174b2a90120b3d54fbe3f32dc5135f94ae75-tmp@sha256:c15b9a9fed931f9e36acb790a40dc866bf4b5ab1ee89a76ee5903162e2cf5445"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":[
"registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e
18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-075541"],"size":"4943877"},{"id":"a95dfd9fc6fc06ace8f36a97862d4c97ed4c2cf9a5f6489def0090014eb567a8","repoDigests":["localhost/minikube-local-cache-test@sha256:fc595bf858656caceb949552c579668a0776b9521557a117043a387373861aeb"],"repoTags":["localhost/minikube-local-cache-test:functional-075541"],"size":"3330"},{"id":"e855ad42997478abd02fb41b14ac2772383090c5623cab58414674a72baff2c9","repoDigests":["localhost/my-image@sha256:b34853e35eb94ab62a2dd52f0774f7594a3de6239de0a7efe462ab7ab5062997"],"repoTags":["localhost/my-image:functional-075541"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"]
,"size":"97846543"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a7
2c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"c69fa2e9cb
f5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-075541 image ls --format json --alsologtostderr:
I1211 23:58:41.295896  105770 out.go:345] Setting OutFile to fd 1 ...
I1211 23:58:41.296094  105770 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:41.296124  105770 out.go:358] Setting ErrFile to fd 2...
I1211 23:58:41.296141  105770 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:41.296696  105770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
I1211 23:58:41.297356  105770 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:41.297460  105770 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:41.297803  105770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:41.297850  105770 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:41.314378  105770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
I1211 23:58:41.314825  105770 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:41.315359  105770 main.go:141] libmachine: Using API Version  1
I1211 23:58:41.315386  105770 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:41.315758  105770 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:41.315968  105770 main.go:141] libmachine: (functional-075541) Calling .GetState
I1211 23:58:41.318076  105770 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:41.318136  105770 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:41.333748  105770 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
I1211 23:58:41.334258  105770 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:41.334894  105770 main.go:141] libmachine: Using API Version  1
I1211 23:58:41.334924  105770 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:41.335306  105770 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:41.335448  105770 main.go:141] libmachine: (functional-075541) Calling .DriverName
I1211 23:58:41.335637  105770 ssh_runner.go:195] Run: systemctl --version
I1211 23:58:41.335667  105770 main.go:141] libmachine: (functional-075541) Calling .GetSSHHostname
I1211 23:58:41.339023  105770 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:41.339507  105770 main.go:141] libmachine: (functional-075541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:55:f7", ip: ""} in network mk-functional-075541: {Iface:virbr1 ExpiryTime:2024-12-12 00:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:55:f7 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:functional-075541 Clientid:01:52:54:00:c4:55:f7}
I1211 23:58:41.339538  105770 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined IP address 192.168.39.42 and MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:41.339723  105770 main.go:141] libmachine: (functional-075541) Calling .GetSSHPort
I1211 23:58:41.339898  105770 main.go:141] libmachine: (functional-075541) Calling .GetSSHKeyPath
I1211 23:58:41.340044  105770 main.go:141] libmachine: (functional-075541) Calling .GetSSHUsername
I1211 23:58:41.340165  105770 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/functional-075541/id_rsa Username:docker}
I1211 23:58:41.446288  105770 ssh_runner.go:195] Run: sudo crictl images --output json
I1211 23:58:41.499987  105770 main.go:141] libmachine: Making call to close driver server
I1211 23:58:41.500005  105770 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:41.500336  105770 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:41.500358  105770 main.go:141] libmachine: Making call to close connection to plugin binary
I1211 23:58:41.500373  105770 main.go:141] libmachine: Making call to close driver server
I1211 23:58:41.500380  105770 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:41.500403  105770 main.go:141] libmachine: (functional-075541) DBG | Closing plugin on server side
I1211 23:58:41.500598  105770 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:41.500614  105770 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
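
Note: `image ls --format json` emits a single JSON array of image records with id, repoDigests, repoTags and size fields, as seen in the stdout above. A minimal Go sketch that decodes it; the image struct is a local convenience type inferred from that output, not a type exported by minikube:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-075541",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("bad JSON:", err)
		return
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}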

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-075541 image ls --format yaml --alsologtostderr:
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-075541
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: a95dfd9fc6fc06ace8f36a97862d4c97ed4c2cf9a5f6489def0090014eb567a8
repoDigests:
- localhost/minikube-local-cache-test@sha256:fc595bf858656caceb949552c579668a0776b9521557a117043a387373861aeb
repoTags:
- localhost/minikube-local-cache-test:functional-075541
size: "3330"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-075541 image ls --format yaml --alsologtostderr:
I1211 23:58:34.780291  105648 out.go:345] Setting OutFile to fd 1 ...
I1211 23:58:34.780431  105648 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:34.780440  105648 out.go:358] Setting ErrFile to fd 2...
I1211 23:58:34.780447  105648 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:34.780632  105648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
I1211 23:58:34.781293  105648 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:34.781395  105648 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:34.781751  105648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:34.781803  105648 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:34.797180  105648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38225
I1211 23:58:34.797659  105648 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:34.798225  105648 main.go:141] libmachine: Using API Version  1
I1211 23:58:34.798246  105648 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:34.798613  105648 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:34.798811  105648 main.go:141] libmachine: (functional-075541) Calling .GetState
I1211 23:58:34.800548  105648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:34.800585  105648 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:34.815456  105648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
I1211 23:58:34.815874  105648 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:34.816331  105648 main.go:141] libmachine: Using API Version  1
I1211 23:58:34.816356  105648 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:34.816659  105648 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:34.816809  105648 main.go:141] libmachine: (functional-075541) Calling .DriverName
I1211 23:58:34.816992  105648 ssh_runner.go:195] Run: systemctl --version
I1211 23:58:34.817021  105648 main.go:141] libmachine: (functional-075541) Calling .GetSSHHostname
I1211 23:58:34.819628  105648 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:34.820118  105648 main.go:141] libmachine: (functional-075541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:55:f7", ip: ""} in network mk-functional-075541: {Iface:virbr1 ExpiryTime:2024-12-12 00:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:55:f7 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:functional-075541 Clientid:01:52:54:00:c4:55:f7}
I1211 23:58:34.820147  105648 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined IP address 192.168.39.42 and MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:34.820279  105648 main.go:141] libmachine: (functional-075541) Calling .GetSSHPort
I1211 23:58:34.820524  105648 main.go:141] libmachine: (functional-075541) Calling .GetSSHKeyPath
I1211 23:58:34.820669  105648 main.go:141] libmachine: (functional-075541) Calling .GetSSHUsername
I1211 23:58:34.820828  105648 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/functional-075541/id_rsa Username:docker}
I1211 23:58:34.906189  105648 ssh_runner.go:195] Run: sudo crictl images --output json
I1211 23:58:34.947421  105648 main.go:141] libmachine: Making call to close driver server
I1211 23:58:34.947434  105648 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:34.947808  105648 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:34.947812  105648 main.go:141] libmachine: (functional-075541) DBG | Closing plugin on server side
I1211 23:58:34.947841  105648 main.go:141] libmachine: Making call to close connection to plugin binary
I1211 23:58:34.947852  105648 main.go:141] libmachine: Making call to close driver server
I1211 23:58:34.947859  105648 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:34.948102  105648 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:34.948117  105648 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (6.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 ssh pgrep buildkitd: exit status 1 (218.298276ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image build -t localhost/my-image:functional-075541 testdata/build --alsologtostderr
2024/12/11 23:58:40 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 image build -t localhost/my-image:functional-075541 testdata/build --alsologtostderr: (5.858432207s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-075541 image build -t localhost/my-image:functional-075541 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4bdcb985456
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-075541
--> e855ad42997
Successfully tagged localhost/my-image:functional-075541
e855ad42997478abd02fb41b14ac2772383090c5623cab58414674a72baff2c9
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-075541 image build -t localhost/my-image:functional-075541 testdata/build --alsologtostderr:
I1211 23:58:35.221466  105704 out.go:345] Setting OutFile to fd 1 ...
I1211 23:58:35.221574  105704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:35.221579  105704 out.go:358] Setting ErrFile to fd 2...
I1211 23:58:35.221583  105704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1211 23:58:35.221747  105704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
I1211 23:58:35.222379  105704 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:35.223023  105704 config.go:182] Loaded profile config "functional-075541": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1211 23:58:35.223398  105704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:35.223455  105704 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:35.238516  105704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44787
I1211 23:58:35.239046  105704 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:35.239640  105704 main.go:141] libmachine: Using API Version  1
I1211 23:58:35.239664  105704 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:35.240048  105704 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:35.240277  105704 main.go:141] libmachine: (functional-075541) Calling .GetState
I1211 23:58:35.242581  105704 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1211 23:58:35.242632  105704 main.go:141] libmachine: Launching plugin server for driver kvm2
I1211 23:58:35.257772  105704 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35939
I1211 23:58:35.258287  105704 main.go:141] libmachine: () Calling .GetVersion
I1211 23:58:35.258865  105704 main.go:141] libmachine: Using API Version  1
I1211 23:58:35.258902  105704 main.go:141] libmachine: () Calling .SetConfigRaw
I1211 23:58:35.259243  105704 main.go:141] libmachine: () Calling .GetMachineName
I1211 23:58:35.259440  105704 main.go:141] libmachine: (functional-075541) Calling .DriverName
I1211 23:58:35.259660  105704 ssh_runner.go:195] Run: systemctl --version
I1211 23:58:35.259691  105704 main.go:141] libmachine: (functional-075541) Calling .GetSSHHostname
I1211 23:58:35.262402  105704 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:35.262776  105704 main.go:141] libmachine: (functional-075541) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c4:55:f7", ip: ""} in network mk-functional-075541: {Iface:virbr1 ExpiryTime:2024-12-12 00:49:23 +0000 UTC Type:0 Mac:52:54:00:c4:55:f7 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:functional-075541 Clientid:01:52:54:00:c4:55:f7}
I1211 23:58:35.262825  105704 main.go:141] libmachine: (functional-075541) DBG | domain functional-075541 has defined IP address 192.168.39.42 and MAC address 52:54:00:c4:55:f7 in network mk-functional-075541
I1211 23:58:35.262973  105704 main.go:141] libmachine: (functional-075541) Calling .GetSSHPort
I1211 23:58:35.263148  105704 main.go:141] libmachine: (functional-075541) Calling .GetSSHKeyPath
I1211 23:58:35.263322  105704 main.go:141] libmachine: (functional-075541) Calling .GetSSHUsername
I1211 23:58:35.263477  105704 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/functional-075541/id_rsa Username:docker}
I1211 23:58:35.354563  105704 build_images.go:161] Building image from path: /tmp/build.251588133.tar
I1211 23:58:35.354640  105704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1211 23:58:35.372773  105704 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.251588133.tar
I1211 23:58:35.382654  105704 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.251588133.tar: stat -c "%s %y" /var/lib/minikube/build/build.251588133.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.251588133.tar': No such file or directory
I1211 23:58:35.382711  105704 ssh_runner.go:362] scp /tmp/build.251588133.tar --> /var/lib/minikube/build/build.251588133.tar (3072 bytes)
I1211 23:58:35.436696  105704 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.251588133
I1211 23:58:35.464015  105704 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.251588133 -xf /var/lib/minikube/build/build.251588133.tar
I1211 23:58:35.481594  105704 crio.go:315] Building image: /var/lib/minikube/build/build.251588133
I1211 23:58:35.481690  105704 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-075541 /var/lib/minikube/build/build.251588133 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1211 23:58:41.000892  105704 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-075541 /var/lib/minikube/build/build.251588133 --cgroup-manager=cgroupfs: (5.519170273s)
I1211 23:58:41.000987  105704 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.251588133
I1211 23:58:41.013165  105704 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.251588133.tar
I1211 23:58:41.025038  105704 build_images.go:217] Built localhost/my-image:functional-075541 from /tmp/build.251588133.tar
I1211 23:58:41.025073  105704 build_images.go:133] succeeded building to: functional-075541
I1211 23:58:41.025080  105704 build_images.go:134] failed building to: 
I1211 23:58:41.025150  105704 main.go:141] libmachine: Making call to close driver server
I1211 23:58:41.025172  105704 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:41.025489  105704 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:41.025513  105704 main.go:141] libmachine: Making call to close connection to plugin binary
I1211 23:58:41.025528  105704 main.go:141] libmachine: Making call to close driver server
I1211 23:58:41.025527  105704 main.go:141] libmachine: (functional-075541) DBG | Closing plugin on server side
I1211 23:58:41.025537  105704 main.go:141] libmachine: (functional-075541) Calling .Close
I1211 23:58:41.025719  105704 main.go:141] libmachine: Successfully made call to close driver server
I1211 23:58:41.025740  105704 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.31s)
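For reference, the guest-side steps logged above can be replayed by hand; this is a rough sketch taken from the log (the temporary build directory name is whatever minikube generated for that run):
    # unpack the uploaded build context and build it with podman inside the VM
    sudo mkdir -p /var/lib/minikube/build/build.251588133
    sudo tar -C /var/lib/minikube/build/build.251588133 -xf /var/lib/minikube/build/build.251588133.tar
    sudo podman build -t localhost/my-image:functional-075541 /var/lib/minikube/build/build.251588133 --cgroup-manager=cgroupfs
    # then, from the host, confirm the cluster runtime can see the result
    out/minikube-linux-amd64 -p functional-075541 image ls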

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.61s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.593044526s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-075541
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.61s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "291.217754ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "55.840006ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image load --daemon kicbase/echo-server:functional-075541 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 image load --daemon kicbase/echo-server:functional-075541 --alsologtostderr: (1.324746843s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "290.806489ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "54.561851ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
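The timing gap above is the point of the check: the --light variant is meant to skip the live cluster status probe, which is presumably why it returns in roughly 55ms instead of 290ms here. A minimal sketch:
    out/minikube-linux-amd64 profile list -o json           # includes status checks
    out/minikube-linux-amd64 profile list -o json --light   # skips them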

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image load --daemon kicbase/echo-server:functional-075541 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Done: docker pull kicbase/echo-server:latest: (1.183497303s)
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-075541
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image load --daemon kicbase/echo-server:functional-075541 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 image load --daemon kicbase/echo-server:functional-075541 --alsologtostderr: (3.277400472s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image save kicbase/echo-server:functional-075541 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 image save kicbase/echo-server:functional-075541 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.810954004s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.14s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image rm kicbase/echo-server:functional-075541 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-075541 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.81900153s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-075541
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 image save --daemon kicbase/echo-server:functional-075541 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-075541
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
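Taken together, the image subtests above amount to a load/save/remove round trip between the host docker daemon and the cluster's CRI-O storage; a condensed sketch, with an illustrative tar path, is:
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-075541
    out/minikube-linux-amd64 -p functional-075541 image load --daemon kicbase/echo-server:functional-075541
    out/minikube-linux-amd64 -p functional-075541 image save kicbase/echo-server:functional-075541 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-075541 image rm kicbase/echo-server:functional-075541
    out/minikube-linux-amd64 -p functional-075541 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-075541 image ls    # run after each step to confirm presence or absence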

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-075541 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-075541 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-8r2t4" [70d653a2-6638-416c-bdaf-668463bb0b06] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-8r2t4" [70d653a2-6638-416c-bdaf-668463bb0b06] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.014778686s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.20s)
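The hello-node service used by the remaining ServiceCmd subtests is created with plain kubectl, roughly:
    kubectl --context functional-075541 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-075541 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-075541 get pods -l app=hello-node    # wait until the pod reports Running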

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdany-port283939923/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733961499633205910" to /tmp/TestFunctionalparallelMountCmdany-port283939923/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733961499633205910" to /tmp/TestFunctionalparallelMountCmdany-port283939923/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733961499633205910" to /tmp/TestFunctionalparallelMountCmdany-port283939923/001/test-1733961499633205910
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.224886ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1211 23:58:19.906780   93600 retry.go:31] will retry after 667.470615ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 11 23:58 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 11 23:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 11 23:58 test-1733961499633205910
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh cat /mount-9p/test-1733961499633205910
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-075541 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [39569e0c-4eaf-42cf-8047-38e6a1db61a6] Pending
helpers_test.go:344: "busybox-mount" [39569e0c-4eaf-42cf-8047-38e6a1db61a6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [39569e0c-4eaf-42cf-8047-38e6a1db61a6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [39569e0c-4eaf-42cf-8047-38e6a1db61a6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.006070045s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-075541 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdany-port283939923/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.00s)
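The mount subtests keep a minikube mount process running and then inspect the 9p mount from inside the guest; a minimal sketch, with an illustrative host directory:
    out/minikube-linux-amd64 mount -p functional-075541 /tmp/host-dir:/mount-9p &    # must stay running for the mount to persist
    out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-075541 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-075541 ssh "sudo umount -f /mount-9p"     # clean up when finished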

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 service list -o json
functional_test.go:1494: Took "431.179719ms" to run "out/minikube-linux-amd64 -p functional-075541 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.42:30242
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.42:30242
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
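The service lookups above are just different views of the same NodePort endpoint:
    out/minikube-linux-amd64 -p functional-075541 service list
    out/minikube-linux-amd64 -p functional-075541 service hello-node --url                            # http://192.168.39.42:30242 in this run
    out/minikube-linux-amd64 -p functional-075541 service --namespace=default --https --url hello-node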

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.69s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdspecific-port1005787780/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.207133ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1211 23:58:30.835794   93600 retry.go:31] will retry after 407.335023ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdspecific-port1005787780/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 ssh "sudo umount -f /mount-9p": exit status 1 (213.04879ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-075541 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdspecific-port1005787780/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup840207876/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup840207876/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup840207876/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T" /mount1: exit status 1 (262.495055ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1211 23:58:32.583644   93600 retry.go:31] will retry after 472.547635ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-075541 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-075541 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup840207876/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup840207876/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-075541 /tmp/TestFunctionalparallelMountCmdVerifyCleanup840207876/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.38s)
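Two details from the last two mount subtests: the 9p server port can be pinned with --port, and leftover mount processes for a profile can be killed in one shot. A sketch, with an illustrative host directory:
    out/minikube-linux-amd64 mount -p functional-075541 /tmp/host-dir:/mount-9p --port 46464 &
    out/minikube-linux-amd64 mount -p functional-075541 --kill=true    # used above to tear down the three parallel mounts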

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-075541
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-075541
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-075541
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (204.66s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565823 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-565823 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m23.986425457s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (204.66s)
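The cluster for this group is created with the --ha flag, which brings up several control-plane nodes in a single invocation; stripped of verbosity flags, the command is:
    out/minikube-linux-amd64 start -p ha-565823 --ha --wait=true --memory=2200 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-565823 status    # should list every control-plane node and its components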

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.31s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-565823 -- rollout status deployment/busybox: (7.142436358s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-nsw2n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-s8nmx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-x4p94 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-nsw2n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-s8nmx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-x4p94 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-nsw2n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-s8nmx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-x4p94 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.31s)
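The DNS checks go through minikube's bundled kubectl; the pattern, using one of the pod names from this run, is:
    out/minikube-linux-amd64 kubectl -p ha-565823 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-nsw2n -- nslookup kubernetes.default.svc.cluster.local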

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-nsw2n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-nsw2n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-s8nmx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-s8nmx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-x4p94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565823 -- exec busybox-7dff88458-x4p94 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.44s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-565823 -v=7 --alsologtostderr
E1212 00:02:46.617822   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:55.697759   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:55.704203   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:55.715560   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:55.737066   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:55.778300   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:55.859707   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:56.021272   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:56.342884   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:56.984883   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:02:58.267028   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:03:00.829017   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:03:05.951012   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:03:16.193324   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-565823 -v=7 --alsologtostderr: (57.542953772s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.44s)
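Adding a worker to the running HA cluster is a single command (verbosity flags omitted); in this run the new machine came up as ha-565823-m04:
    out/minikube-linux-amd64 node add -p ha-565823
    out/minikube-linux-amd64 -p ha-565823 status    # the added node should appear alongside the control-plane nodes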

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-565823 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.04s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp testdata/cp-test.txt ha-565823:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823:/home/docker/cp-test.txt ha-565823-m02:/home/docker/cp-test_ha-565823_ha-565823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test_ha-565823_ha-565823-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823:/home/docker/cp-test.txt ha-565823-m03:/home/docker/cp-test_ha-565823_ha-565823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m03 "sudo cat /home/docker/cp-test_ha-565823_ha-565823-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823:/home/docker/cp-test.txt ha-565823-m04:/home/docker/cp-test_ha-565823_ha-565823-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m04 "sudo cat /home/docker/cp-test_ha-565823_ha-565823-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp testdata/cp-test.txt ha-565823-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m02:/home/docker/cp-test.txt ha-565823:/home/docker/cp-test_ha-565823-m02_ha-565823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823 "sudo cat /home/docker/cp-test_ha-565823-m02_ha-565823.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m02:/home/docker/cp-test.txt ha-565823-m03:/home/docker/cp-test_ha-565823-m02_ha-565823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m03 "sudo cat /home/docker/cp-test_ha-565823-m02_ha-565823-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m02:/home/docker/cp-test.txt ha-565823-m04:/home/docker/cp-test_ha-565823-m02_ha-565823-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m04 "sudo cat /home/docker/cp-test_ha-565823-m02_ha-565823-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp testdata/cp-test.txt ha-565823-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt ha-565823:/home/docker/cp-test_ha-565823-m03_ha-565823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823 "sudo cat /home/docker/cp-test_ha-565823-m03_ha-565823.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt ha-565823-m02:/home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test_ha-565823-m03_ha-565823-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m03:/home/docker/cp-test.txt ha-565823-m04:/home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m04 "sudo cat /home/docker/cp-test_ha-565823-m03_ha-565823-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp testdata/cp-test.txt ha-565823-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3066525188/001/cp-test_ha-565823-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt ha-565823:/home/docker/cp-test_ha-565823-m04_ha-565823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823 "sudo cat /home/docker/cp-test_ha-565823-m04_ha-565823.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt ha-565823-m02:/home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test_ha-565823-m04_ha-565823-m02.txt"
E1212 00:03:36.675638   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m04:/home/docker/cp-test.txt ha-565823-m03:/home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m03 "sudo cat /home/docker/cp-test_ha-565823-m04_ha-565823-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.04s)
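The copy matrix above reduces to three forms of minikube cp plus an ssh check; for example (the host destination path here is illustrative):
    out/minikube-linux-amd64 -p ha-565823 cp testdata/cp-test.txt ha-565823-m02:/home/docker/cp-test.txt              # host to node
    out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-565823-m02.txt    # node to host
    out/minikube-linux-amd64 -p ha-565823 cp ha-565823-m02:/home/docker/cp-test.txt ha-565823-m03:/home/docker/cp-test_ha-565823-m02_ha-565823-m03.txt    # node to node
    out/minikube-linux-amd64 -p ha-565823 ssh -n ha-565823-m02 "sudo cat /home/docker/cp-test.txt"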

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.7s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 node delete m03 -v=7 --alsologtostderr
E1212 00:12:46.618265   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-565823 node delete m03 -v=7 --alsologtostderr: (15.948602702s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.70s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (272.99s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565823 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 00:17:46.618339   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:17:55.700012   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:19:18.763972   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-565823 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m32.221680212s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (272.99s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.57s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-565823 --control-plane -v=7 --alsologtostderr
E1212 00:20:49.690609   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-565823 --control-plane -v=7 --alsologtostderr: (1m17.709835933s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-565823 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.57s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

                                                
                                    
TestJSONOutput/start/Command (86.79s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-118708 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-118708 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m26.787390428s)
--- PASS: TestJSONOutput/start/Command (86.79s)
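The TestJSONOutput group runs the normal lifecycle commands with --output=json; each command is expected to emit structured progress events (roughly one JSON object per line) that the DistinctCurrentSteps and IncreasingCurrentSteps checks below then validate. The commands themselves are:
    out/minikube-linux-amd64 start -p json-output-118708 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 pause -p json-output-118708 --output=json --user=testUser
    out/minikube-linux-amd64 unpause -p json-output-118708 --output=json --user=testUser
    out/minikube-linux-amd64 stop -p json-output-118708 --output=json --user=testUser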

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-118708 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-118708 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.36s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-118708 --output=json --user=testUser
E1212 00:22:46.618312   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-118708 --output=json --user=testUser: (7.364703483s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-001846 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-001846 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.22201ms)

-- stdout --
	{"specversion":"1.0","id":"a40b9c99-c066-436e-872b-bc3aac94913d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-001846] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c0d629a4-47b1-4756-b062-771332091ae3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20083"}}
	{"specversion":"1.0","id":"deaf60ce-9f3a-4847-a4a3-ffc13caa8d71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0970d6c8-d860-4af9-a6e3-29b71cdc90d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig"}}
	{"specversion":"1.0","id":"cc0916ca-d670-4579-841f-aa44bd40c0a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube"}}
	{"specversion":"1.0","id":"e09d44be-523d-4388-a6b0-a851c38e5d54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9efa3bbd-01a2-4981-81a9-0581284e397a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"87f8b9c7-818a-4a23-8eb6-00973f5c621c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-001846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-001846
--- PASS: TestErrorJSONOutput (0.20s)
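Note on the output format: each stdout line above is a self-contained CloudEvents-style JSON object, and the event type distinguishes steps, informational messages, and errors. A minimal consumer sketch in Go, assuming only the fields visible in the stdout above (illustrative tooling, not part of the test suite):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the CloudEvents-style lines minikube prints with --output=json;
// only the fields used below are declared, matching the stdout shown above.
type event struct {
	ID   string            `json:"id"`
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// Example usage (hypothetical): minikube start -p demo --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // tolerate any non-JSON lines in the stream
		}
		switch e.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error (exit %s): %s\n", e.Data["exitcode"], e.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", e.Data["currentstep"], e.Data["totalsteps"], e.Data["message"])
		default:
			fmt.Println(e.Data["message"])
		}
	}
}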

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (93.69s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-073787 --driver=kvm2  --container-runtime=crio
E1212 00:22:55.700846   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-073787 --driver=kvm2  --container-runtime=crio: (44.84322795s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-104170 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-104170 --driver=kvm2  --container-runtime=crio: (45.836867325s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-073787
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-104170
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-104170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-104170
helpers_test.go:175: Cleaning up "first-073787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-073787
--- PASS: TestMinikubeProfile (93.69s)

TestMountStart/serial/StartWithMountFirst (28.06s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-954599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-954599 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.056530533s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.06s)

TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-954599 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-954599 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (28.96s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-973229 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-973229 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.963232133s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.96s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-973229 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-973229 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.88s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-954599 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.88s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-973229 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-973229 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-973229
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-973229: (1.277804406s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (24.86s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-973229
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-973229: (23.860890434s)
--- PASS: TestMountStart/serial/RestartStopped (24.86s)

TestMountStart/serial/VerifyMountPostStop (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-973229 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-973229 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (120.56s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-492537 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 00:27:46.617772   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-492537 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m0.147042925s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (120.56s)

TestMultiNode/serial/DeployApp2Nodes (10.01s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- rollout status deployment/busybox
E1212 00:27:55.700359   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-492537 -- rollout status deployment/busybox: (8.503520121s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-g9tvw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-zdpfs -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-g9tvw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-zdpfs -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-g9tvw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-zdpfs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (10.01s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-g9tvw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-g9tvw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-zdpfs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-492537 -- exec busybox-7dff88458-zdpfs -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
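The pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) picks the host IP (192.168.39.1 here) out of busybox nslookup output before pinging it. A rough Go sketch of the same line-5/field-3 extraction follows; the sample transcript is hypothetical, since the exact busybox output layout is an assumption:

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mirrors `awk 'NR==5' | cut -d' ' -f3`: take line 5 of the
// nslookup output and return its third space-separated field.
func hostIPFromNslookup(out string) (string, bool) {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true
}

func main() {
	// Hypothetical busybox nslookup transcript; the test captures the real one
	// via `kubectl exec <pod> -- sh -c "nslookup host.minikube.internal"`.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1`
	if ip, ok := hostIPFromNslookup(sample); ok {
		fmt.Println("host.minikube.internal ->", ip) // 192.168.39.1
	}
}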

TestMultiNode/serial/AddNode (54.68s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-492537 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-492537 -v 3 --alsologtostderr: (54.111339614s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.68s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-492537 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.58s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (7.32s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp testdata/cp-test.txt multinode-492537:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3175371616/001/cp-test_multinode-492537.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537:/home/docker/cp-test.txt multinode-492537-m02:/home/docker/cp-test_multinode-492537_multinode-492537-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m02 "sudo cat /home/docker/cp-test_multinode-492537_multinode-492537-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537:/home/docker/cp-test.txt multinode-492537-m03:/home/docker/cp-test_multinode-492537_multinode-492537-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m03 "sudo cat /home/docker/cp-test_multinode-492537_multinode-492537-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp testdata/cp-test.txt multinode-492537-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3175371616/001/cp-test_multinode-492537-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537-m02:/home/docker/cp-test.txt multinode-492537:/home/docker/cp-test_multinode-492537-m02_multinode-492537.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537 "sudo cat /home/docker/cp-test_multinode-492537-m02_multinode-492537.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537-m02:/home/docker/cp-test.txt multinode-492537-m03:/home/docker/cp-test_multinode-492537-m02_multinode-492537-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m03 "sudo cat /home/docker/cp-test_multinode-492537-m02_multinode-492537-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp testdata/cp-test.txt multinode-492537-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3175371616/001/cp-test_multinode-492537-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt multinode-492537:/home/docker/cp-test_multinode-492537-m03_multinode-492537.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537 "sudo cat /home/docker/cp-test_multinode-492537-m03_multinode-492537.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 cp multinode-492537-m03:/home/docker/cp-test.txt multinode-492537-m02:/home/docker/cp-test_multinode-492537-m03_multinode-492537-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 ssh -n multinode-492537-m02 "sudo cat /home/docker/cp-test_multinode-492537-m03_multinode-492537-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.32s)
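The CopyFile sequence above pushes testdata/cp-test.txt into each node with `minikube cp` and reads it back via `minikube ssh -n <node> "sudo cat ..."`. A minimal stand-alone sketch of one such round-trip, assuming a minikube binary on PATH and placeholder profile/node names:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "multinode-492537"     // placeholder profile name
		node    = "multinode-492537-m02" // placeholder node name
		remote  = "/home/docker/cp-test.txt"
	)

	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}

	// minikube -p <profile> cp <local> <node>:<remote>
	if out, err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":"+remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// minikube -p <profile> ssh -n <node> "sudo cat <remote>"
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat "+remote).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}

	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(local)) {
		log.Fatalf("contents differ after round-trip")
	}
	log.Println("cp round-trip OK")
}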

TestMultiNode/serial/StopNode (2.41s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-492537 node stop m03: (1.557231443s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-492537 status: exit status 7 (423.258266ms)

-- stdout --
	multinode-492537
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-492537-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-492537-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr: exit status 7 (428.475207ms)

-- stdout --
	multinode-492537
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-492537-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-492537-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1212 00:29:07.069336  123434 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:29:07.069470  123434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:07.069480  123434 out.go:358] Setting ErrFile to fd 2...
	I1212 00:29:07.069485  123434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:29:07.069688  123434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:29:07.069844  123434 out.go:352] Setting JSON to false
	I1212 00:29:07.069876  123434 mustload.go:65] Loading cluster: multinode-492537
	I1212 00:29:07.069965  123434 notify.go:220] Checking for updates...
	I1212 00:29:07.070269  123434 config.go:182] Loaded profile config "multinode-492537": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:29:07.070291  123434 status.go:174] checking status of multinode-492537 ...
	I1212 00:29:07.070710  123434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:29:07.070744  123434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:29:07.086414  123434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I1212 00:29:07.086865  123434 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:29:07.087471  123434 main.go:141] libmachine: Using API Version  1
	I1212 00:29:07.087502  123434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:29:07.087832  123434 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:29:07.088008  123434 main.go:141] libmachine: (multinode-492537) Calling .GetState
	I1212 00:29:07.089651  123434 status.go:371] multinode-492537 host status = "Running" (err=<nil>)
	I1212 00:29:07.089671  123434 host.go:66] Checking if "multinode-492537" exists ...
	I1212 00:29:07.089959  123434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:29:07.089993  123434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:29:07.105040  123434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33167
	I1212 00:29:07.105461  123434 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:29:07.105908  123434 main.go:141] libmachine: Using API Version  1
	I1212 00:29:07.105926  123434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:29:07.106273  123434 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:29:07.106447  123434 main.go:141] libmachine: (multinode-492537) Calling .GetIP
	I1212 00:29:07.109018  123434 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:29:07.109477  123434 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:29:07.109508  123434 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:29:07.109614  123434 host.go:66] Checking if "multinode-492537" exists ...
	I1212 00:29:07.110007  123434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:29:07.110057  123434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:29:07.125373  123434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46709
	I1212 00:29:07.125860  123434 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:29:07.126320  123434 main.go:141] libmachine: Using API Version  1
	I1212 00:29:07.126342  123434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:29:07.126707  123434 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:29:07.126911  123434 main.go:141] libmachine: (multinode-492537) Calling .DriverName
	I1212 00:29:07.127074  123434 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:07.127094  123434 main.go:141] libmachine: (multinode-492537) Calling .GetSSHHostname
	I1212 00:29:07.129404  123434 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:29:07.129795  123434 main.go:141] libmachine: (multinode-492537) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:52:d1", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:26:06 +0000 UTC Type:0 Mac:52:54:00:b1:52:d1 Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:multinode-492537 Clientid:01:52:54:00:b1:52:d1}
	I1212 00:29:07.129825  123434 main.go:141] libmachine: (multinode-492537) DBG | domain multinode-492537 has defined IP address 192.168.39.208 and MAC address 52:54:00:b1:52:d1 in network mk-multinode-492537
	I1212 00:29:07.129943  123434 main.go:141] libmachine: (multinode-492537) Calling .GetSSHPort
	I1212 00:29:07.130105  123434 main.go:141] libmachine: (multinode-492537) Calling .GetSSHKeyPath
	I1212 00:29:07.130234  123434 main.go:141] libmachine: (multinode-492537) Calling .GetSSHUsername
	I1212 00:29:07.130340  123434 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537/id_rsa Username:docker}
	I1212 00:29:07.215660  123434 ssh_runner.go:195] Run: systemctl --version
	I1212 00:29:07.221842  123434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:07.242074  123434 kubeconfig.go:125] found "multinode-492537" server: "https://192.168.39.208:8443"
	I1212 00:29:07.242114  123434 api_server.go:166] Checking apiserver status ...
	I1212 00:29:07.242157  123434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 00:29:07.255775  123434 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1075/cgroup
	W1212 00:29:07.265402  123434 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1075/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1212 00:29:07.265446  123434 ssh_runner.go:195] Run: ls
	I1212 00:29:07.269628  123434 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I1212 00:29:07.275103  123434 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I1212 00:29:07.275132  123434 status.go:463] multinode-492537 apiserver status = Running (err=<nil>)
	I1212 00:29:07.275143  123434 status.go:176] multinode-492537 status: &{Name:multinode-492537 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:29:07.275167  123434 status.go:174] checking status of multinode-492537-m02 ...
	I1212 00:29:07.275511  123434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:29:07.275577  123434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:29:07.290868  123434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I1212 00:29:07.291295  123434 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:29:07.291751  123434 main.go:141] libmachine: Using API Version  1
	I1212 00:29:07.291768  123434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:29:07.292125  123434 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:29:07.292319  123434 main.go:141] libmachine: (multinode-492537-m02) Calling .GetState
	I1212 00:29:07.294000  123434 status.go:371] multinode-492537-m02 host status = "Running" (err=<nil>)
	I1212 00:29:07.294031  123434 host.go:66] Checking if "multinode-492537-m02" exists ...
	I1212 00:29:07.294321  123434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:29:07.294361  123434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:29:07.309169  123434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33965
	I1212 00:29:07.309567  123434 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:29:07.310068  123434 main.go:141] libmachine: Using API Version  1
	I1212 00:29:07.310092  123434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:29:07.310413  123434 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:29:07.310615  123434 main.go:141] libmachine: (multinode-492537-m02) Calling .GetIP
	I1212 00:29:07.313385  123434 main.go:141] libmachine: (multinode-492537-m02) DBG | domain multinode-492537-m02 has defined MAC address 52:54:00:3a:e8:f4 in network mk-multinode-492537
	I1212 00:29:07.313820  123434 main.go:141] libmachine: (multinode-492537-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:e8:f4", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:27:15 +0000 UTC Type:0 Mac:52:54:00:3a:e8:f4 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-492537-m02 Clientid:01:52:54:00:3a:e8:f4}
	I1212 00:29:07.313852  123434 main.go:141] libmachine: (multinode-492537-m02) DBG | domain multinode-492537-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:3a:e8:f4 in network mk-multinode-492537
	I1212 00:29:07.313990  123434 host.go:66] Checking if "multinode-492537-m02" exists ...
	I1212 00:29:07.314314  123434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:29:07.314406  123434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:29:07.329119  123434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33659
	I1212 00:29:07.329530  123434 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:29:07.330025  123434 main.go:141] libmachine: Using API Version  1
	I1212 00:29:07.330049  123434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:29:07.330368  123434 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:29:07.330561  123434 main.go:141] libmachine: (multinode-492537-m02) Calling .DriverName
	I1212 00:29:07.330746  123434 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 00:29:07.330767  123434 main.go:141] libmachine: (multinode-492537-m02) Calling .GetSSHHostname
	I1212 00:29:07.333094  123434 main.go:141] libmachine: (multinode-492537-m02) DBG | domain multinode-492537-m02 has defined MAC address 52:54:00:3a:e8:f4 in network mk-multinode-492537
	I1212 00:29:07.333488  123434 main.go:141] libmachine: (multinode-492537-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:e8:f4", ip: ""} in network mk-multinode-492537: {Iface:virbr1 ExpiryTime:2024-12-12 01:27:15 +0000 UTC Type:0 Mac:52:54:00:3a:e8:f4 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:multinode-492537-m02 Clientid:01:52:54:00:3a:e8:f4}
	I1212 00:29:07.333527  123434 main.go:141] libmachine: (multinode-492537-m02) DBG | domain multinode-492537-m02 has defined IP address 192.168.39.186 and MAC address 52:54:00:3a:e8:f4 in network mk-multinode-492537
	I1212 00:29:07.333656  123434 main.go:141] libmachine: (multinode-492537-m02) Calling .GetSSHPort
	I1212 00:29:07.333828  123434 main.go:141] libmachine: (multinode-492537-m02) Calling .GetSSHKeyPath
	I1212 00:29:07.333936  123434 main.go:141] libmachine: (multinode-492537-m02) Calling .GetSSHUsername
	I1212 00:29:07.334047  123434 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20083-86355/.minikube/machines/multinode-492537-m02/id_rsa Username:docker}
	I1212 00:29:07.415706  123434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 00:29:07.429935  123434 status.go:176] multinode-492537-m02 status: &{Name:multinode-492537-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 00:29:07.429987  123434 status.go:174] checking status of multinode-492537-m03 ...
	I1212 00:29:07.430414  123434 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1212 00:29:07.430462  123434 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1212 00:29:07.446922  123434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33889
	I1212 00:29:07.447322  123434 main.go:141] libmachine: () Calling .GetVersion
	I1212 00:29:07.447851  123434 main.go:141] libmachine: Using API Version  1
	I1212 00:29:07.447873  123434 main.go:141] libmachine: () Calling .SetConfigRaw
	I1212 00:29:07.448199  123434 main.go:141] libmachine: () Calling .GetMachineName
	I1212 00:29:07.448386  123434 main.go:141] libmachine: (multinode-492537-m03) Calling .GetState
	I1212 00:29:07.449901  123434 status.go:371] multinode-492537-m03 host status = "Stopped" (err=<nil>)
	I1212 00:29:07.449924  123434 status.go:384] host is not running, skipping remaining checks
	I1212 00:29:07.449930  123434 status.go:176] multinode-492537-m03 status: &{Name:multinode-492537-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)

TestMultiNode/serial/StartAfterStop (41.59s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-492537 node start m03 -v=7 --alsologtostderr: (40.956098553s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.59s)

TestMultiNode/serial/DeleteNode (2.24s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-492537 node delete m03: (1.710307065s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.24s)

TestMultiNode/serial/RestartMultiNode (205.54s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-492537 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1212 00:37:46.618543   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:37:55.700968   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-492537 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m24.99706724s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-492537 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (205.54s)

TestMultiNode/serial/ValidateNameConflict (43.96s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-492537
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-492537-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-492537-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (66.214202ms)

-- stdout --
	* [multinode-492537-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-492537-m02' is duplicated with machine name 'multinode-492537-m02' in profile 'multinode-492537'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-492537-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-492537-m03 --driver=kvm2  --container-runtime=crio: (42.853327009s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-492537
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-492537: exit status 80 (224.202688ms)

-- stdout --
	* Adding node m03 to cluster multinode-492537 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-492537-m03 already exists in multinode-492537-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-492537-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.96s)

TestScheduledStopUnix (114.13s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-942223 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-942223 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.48288787s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-942223 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-942223 -n scheduled-stop-942223
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-942223 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1212 00:46:10.464785   93600 retry.go:31] will retry after 80.685µs: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.465932   93600 retry.go:31] will retry after 123.783µs: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.467065   93600 retry.go:31] will retry after 281.669µs: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.468196   93600 retry.go:31] will retry after 288.546µs: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.469360   93600 retry.go:31] will retry after 447.176µs: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.470485   93600 retry.go:31] will retry after 700.81µs: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.471610   93600 retry.go:31] will retry after 1.075052ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.472765   93600 retry.go:31] will retry after 1.299865ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.474955   93600 retry.go:31] will retry after 3.441791ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.479170   93600 retry.go:31] will retry after 3.039921ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.482327   93600 retry.go:31] will retry after 4.855185ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.487539   93600 retry.go:31] will retry after 4.872993ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.492769   93600 retry.go:31] will retry after 11.421024ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.505021   93600 retry.go:31] will retry after 19.209646ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
I1212 00:46:10.525281   93600 retry.go:31] will retry after 33.407359ms: open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/scheduled-stop-942223/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-942223 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-942223 -n scheduled-stop-942223
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-942223
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-942223 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-942223
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-942223: exit status 7 (66.746341ms)

-- stdout --
	scheduled-stop-942223
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-942223 -n scheduled-stop-942223
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-942223 -n scheduled-stop-942223: exit status 7 (63.818257ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-942223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-942223
--- PASS: TestScheduledStopUnix (114.13s)
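The scheduled-stop flow above arms a stop with --schedule, inspects status --format={{.TimeToStop}}, and then cancels with --cancel-scheduled. A small sketch driving the same flags from Go; the profile name is a placeholder and the minikube binary is assumed to be on PATH:

package main

import (
	"log"
	"os/exec"
)

// run shells out to minikube and returns combined output, logging any error.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Printf("minikube %v: %v", args, err)
	}
	return string(out)
}

func main() {
	const profile = "scheduled-stop-demo" // placeholder profile name

	run("stop", "-p", profile, "--schedule", "5m") // arm a stop five minutes out
	log.Printf("time to stop: %s", run("status", "--format={{.TimeToStop}}", "-p", profile))

	run("stop", "-p", profile, "--cancel-scheduled") // then cancel it, as the test does
	log.Printf("host after cancel: %s", run("status", "--format={{.Host}}", "-p", profile))
}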

TestRunningBinaryUpgrade (188.78s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3145119036 start -p running-upgrade-438877 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1212 00:47:46.617942   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
E1212 00:47:55.697902   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3145119036 start -p running-upgrade-438877 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m2.419203501s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-438877 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-438877 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.780630735s)
helpers_test.go:175: Cleaning up "running-upgrade-438877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-438877
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-438877: (1.19355943s)
--- PASS: TestRunningBinaryUpgrade (188.78s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410816 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-410816 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (85.391949ms)

-- stdout --
	* [NoKubernetes-410816] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
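The quick failure above is minikube's own flag validation: --kubernetes-version cannot be combined with --no-kubernetes, so the command exits with MK_USAGE (status 14) before any VM work starts. A standalone Go sketch (not the test suite's own helper code) of reproducing and checking that exit code, reusing the binary path, profile name, and flags shown in the log:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test exercises: combining --no-kubernetes with
	// --kubernetes-version is rejected before any VM is created.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "NoKubernetes-410816",
		"--no-kubernetes", "--kubernetes-version=1.20",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Printf("got MK_USAGE as expected:\n%s", out)
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}
```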

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (94.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410816 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-410816 --driver=kvm2  --container-runtime=crio: (1m34.313839507s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-410816 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.57s)

                                                
                                    
x
+
TestPause/serial/Start (89.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-409734 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-409734 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m29.356067295s)
--- PASS: TestPause/serial/Start (89.36s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (38.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410816 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-410816 --no-kubernetes --driver=kvm2  --container-runtime=crio: (37.832991605s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-410816 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-410816 status -o json: exit status 2 (242.85352ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-410816","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-410816
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.92s)
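The `status -o json` output captured above is a small flat object, and exit status 2 reflects the partially-running state it describes. A hedged Go sketch of decoding it; the struct mirrors only the fields visible in this report, so the real command may emit more:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors only the fields visible in the report's output.
type profileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	// JSON copied verbatim from the test output above.
	raw := []byte(`{"Name":"NoKubernetes-410816","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var st profileStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// Host stays Running while kubelet and apiserver are Stopped: the
	// state a --no-kubernetes restart leaves behind.
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}
```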

                                                
                                    
x
+
TestNoKubernetes/serial/Start (29.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410816 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-410816 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.316570027s)
--- PASS: TestNoKubernetes/serial/Start (29.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-410816 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-410816 "sudo systemctl is-active --quiet service kubelet": exit status 1 (186.815984ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
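The verification above relies only on the exit code of `systemctl is-active --quiet` inside the guest: non-zero (here propagated as ssh status 3, CLI exit 1) means no active kubelet unit. A rough standalone equivalent of the same probe, using the exact `minikube ssh` command and profile name from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test runs over SSH; a non-zero exit from
	// `systemctl is-active --quiet` means no active kubelet unit.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-410816",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running, as expected for a --no-kubernetes profile")
		return
	}
	fmt.Println("kubelet is active; the check above would fail in this case")
}
```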

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-410816
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-410816: (1.354727099s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (25.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-410816 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-410816 --driver=kvm2  --container-runtime=crio: (25.689780335s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (25.69s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (45.1s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-409734 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-409734 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.076702734s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-410816 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-410816 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.067112ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-018985 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-018985 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (105.277294ms)

                                                
                                                
-- stdout --
	* [false-018985] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20083
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 00:50:37.388944  134067 out.go:345] Setting OutFile to fd 1 ...
	I1212 00:50:37.389054  134067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:37.389058  134067 out.go:358] Setting ErrFile to fd 2...
	I1212 00:50:37.389063  134067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1212 00:50:37.389238  134067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20083-86355/.minikube/bin
	I1212 00:50:37.389801  134067 out.go:352] Setting JSON to false
	I1212 00:50:37.390715  134067 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":12779,"bootTime":1733951858,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1212 00:50:37.390816  134067 start.go:139] virtualization: kvm guest
	I1212 00:50:37.392864  134067 out.go:177] * [false-018985] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1212 00:50:37.394212  134067 notify.go:220] Checking for updates...
	I1212 00:50:37.394223  134067 out.go:177]   - MINIKUBE_LOCATION=20083
	I1212 00:50:37.395663  134067 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 00:50:37.397205  134067 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20083-86355/kubeconfig
	I1212 00:50:37.398699  134067 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20083-86355/.minikube
	I1212 00:50:37.400099  134067 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1212 00:50:37.401520  134067 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 00:50:37.403267  134067 config.go:182] Loaded profile config "force-systemd-env-923531": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:50:37.403357  134067 config.go:182] Loaded profile config "kubernetes-upgrade-459384": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1212 00:50:37.403471  134067 config.go:182] Loaded profile config "pause-409734": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1212 00:50:37.403556  134067 driver.go:394] Setting default libvirt URI to qemu:///system
	I1212 00:50:37.439683  134067 out.go:177] * Using the kvm2 driver based on user configuration
	I1212 00:50:37.441033  134067 start.go:297] selected driver: kvm2
	I1212 00:50:37.441047  134067 start.go:901] validating driver "kvm2" against <nil>
	I1212 00:50:37.441061  134067 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 00:50:37.443232  134067 out.go:201] 
	W1212 00:50:37.444543  134067 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1212 00:50:37.445873  134067 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-018985 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-018985" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Dec 2024 00:49:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.205:8443
  name: pause-409734
contexts:
- context:
    cluster: pause-409734
    extensions:
    - extension:
        last-update: Thu, 12 Dec 2024 00:49:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-409734
  name: pause-409734
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-409734
  user:
    client-certificate: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/pause-409734/client.crt
    client-key: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/pause-409734/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-018985

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-018985"

                                                
                                                
----------------------- debugLogs end: false-018985 [took: 3.089788422s] --------------------------------
helpers_test.go:175: Cleaning up "false-018985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-018985
--- PASS: TestNetworkPlugins/group/false (3.37s)
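The whole subtest above costs only a few seconds because minikube rejects `--cni=false` with the crio runtime up front (MK_USAGE, exit status 14) and the debug-log collector then finds no cluster or profile to inspect. A small standalone sketch that reproduces the same rejection and asserts on the error text shown in the stderr block; the flags and profile name are copied from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Flags copied from the log; --cni=false with the crio runtime is
	// rejected with MK_USAGE before any VM is created.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "false-018985",
		"--memory=2048", "--cni=false", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	if err != nil && strings.Contains(string(out), `The "crio" container runtime requires CNI`) {
		fmt.Println("got the expected CNI usage error")
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}
```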

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (3.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.19s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (128.64s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2684438296 start -p stopped-upgrade-213355 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2684438296 start -p stopped-upgrade-213355 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m3.241365024s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2684438296 -p stopped-upgrade-213355 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2684438296 -p stopped-upgrade-213355 stop: (2.388413633s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-213355 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-213355 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.005963344s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (128.64s)
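The upgrade path exercised here is: start the cluster with an older release binary, stop it with that same binary, then start it again with the binary under test. A compressed sketch of that sequence, outside the real test harness; the /tmp path is the temporary copy of the old release the test downloads, and the remaining arguments are taken from the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one step of the upgrade sequence, streaming its output.
func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	old := "/tmp/minikube-v1.26.0.2684438296" // temp copy of the old release used by the test
	cur := "out/minikube-linux-amd64"         // binary under test
	profile := "stopped-upgrade-213355"

	steps := [][]string{
		{old, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio"},
		{old, "-p", profile, "stop"},
		{cur, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio"},
	}
	for _, step := range steps {
		if err := run(step[0], step[1:]...); err != nil {
			fmt.Printf("step %v failed: %v\n", step, err)
			return
		}
	}
	fmt.Println("stopped-binary upgrade sequence completed")
}
```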

                                                
                                    
x
+
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-409734 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-409734 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-409734 --output=json --layout=cluster: exit status 2 (236.37852ms)

                                                
                                                
-- stdout --
	{"Name":"pause-409734","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-409734","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
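With `--layout=cluster`, the status JSON above is nested and uses HTTP-style status codes (418 Paused, 405 Stopped, 200 OK) for the profile, its components, and each node; exit status 2 is expected for a paused cluster. A hedged Go sketch of decoding it; the types cover only the fields visible in this output:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Types cover only the fields visible in the output above; unknown fields
// such as Step and StepDetail are ignored by encoding/json.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	// JSON copied verbatim from the test output above.
	raw := []byte(`{"Name":"pause-409734","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-409734","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`)
	var cs clusterStatus
	if err := json.Unmarshal(raw, &cs); err != nil {
		panic(err)
	}
	for _, n := range cs.Nodes {
		fmt.Printf("%s: apiserver=%s kubelet=%s\n",
			n.Name, n.Components["apiserver"].StatusName, n.Components["kubelet"].StatusName)
	}
}
```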

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-409734 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-409734 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.82s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-409734 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-409734 --alsologtostderr -v=5: (1.818880435s)
--- PASS: TestPause/serial/DeletePaused (1.82s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.64s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-213355
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-213355: (1.008258768s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (89.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-242725 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-242725 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m29.488745445s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (89.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (65.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-607268 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-607268 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m5.388199749s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (14.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-242725 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6ebf78a3-0f5a-4e9d-b594-f83a42986409] Pending
helpers_test.go:344: "busybox" [6ebf78a3-0f5a-4e9d-b594-f83a42986409] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6ebf78a3-0f5a-4e9d-b594-f83a42986409] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 14.004053759s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-242725 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (14.29s)
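The DeployApp step above boils down to `kubectl create` from the test's own manifest, a wait for the pod behind the `integration-test=busybox` label to become Ready, then an in-pod `ulimit -n` check. A rough standalone equivalent driven through kubectl, with the context name, label, and manifest path copied from the log (the wait here uses `kubectl wait` rather than the test's own polling helper):

```go
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs one kubectl command against the context from the log.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "no-preload-242725"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	steps := [][]string{
		// Create the pod from the test's own manifest.
		{"create", "-f", "testdata/busybox.yaml"},
		// Block until the labelled pod reports Ready, mirroring the 8m0s wait above.
		{"wait", "--for=condition=ready", "pod", "-l", "integration-test=busybox", "--timeout=8m"},
		// The post-deploy check the test runs inside the pod.
		{"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, step := range steps {
		out, err := kubectl(step...)
		fmt.Printf("kubectl %v:\n%s", step, out)
		if err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}
```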

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (13.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-607268 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c6b9f083-143a-4316-873f-97f24ddcad12] Pending
helpers_test.go:344: "busybox" [c6b9f083-143a-4316-873f-97f24ddcad12] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c6b9f083-143a-4316-873f-97f24ddcad12] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.004932515s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-607268 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-242725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-242725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068367821s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-242725 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-607268 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-607268 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.209981698s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-607268 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-076578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-076578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (59.669717899s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.67s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-076578 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bfe3320f-40a0-438c-929c-9c4223f5da4a] Pending
helpers_test.go:344: "busybox" [bfe3320f-40a0-438c-929c-9c4223f5da4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bfe3320f-40a0-438c-929c-9c4223f5da4a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004049129s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-076578 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-076578 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-076578 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (677.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-242725 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-242725 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (11m17.397289731s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-242725 -n no-preload-242725
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (677.66s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (611.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-607268 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-607268 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m11.275127268s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-607268 -n embed-certs-607268
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (611.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (557.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-076578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-076578 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m17.322520519s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-076578 -n default-k8s-diff-port-076578
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (557.59s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-738445 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-738445 --alsologtostderr -v=3: (2.305696498s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738445 -n old-k8s-version-738445: exit status 7 (64.022681ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-738445 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
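The `status --format={{.Host}}` probe above exits with status 7 when the host is stopped, which the test notes "may be ok" before re-enabling the dashboard addon against the stopped profile. A small sketch of that tolerant handling, with the binary path, flags, and profile name taken from the log:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-738445", "-n", "old-k8s-version-738445")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host state:", strings.TrimSpace(string(out)))
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit status 7 reports a stopped host; the test treats this as
		// acceptable and goes on to enable the dashboard addon anyway.
		fmt.Println("host is stopped (exit 7), continuing")
	default:
		fmt.Println("unexpected status error:", err)
	}
}
```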

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-819544 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-819544 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (49.261689947s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (84.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m24.700930005s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-819544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-819544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.137822287s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (92.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-819544 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-819544 --alsologtostderr -v=3: (1m32.637037956s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (92.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (72.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m12.108917855s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.11s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-018985 "pgrep -a kubelet"
I1212 01:25:13.897277   93600 config.go:182] Loaded profile config "auto-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-018985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qm6dl" [f825c8d0-7526-4100-b5f3-877fe53749df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:25:15.070785   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:15.077157   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:15.088493   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:15.109912   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:15.151714   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:15.233473   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:15.394868   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:15.716541   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:16.358276   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:25:17.640436   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-qm6dl" [f825c8d0-7526-4100-b5f3-877fe53749df] Running
E1212 01:25:20.202333   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005304857s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-018985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1212 01:25:25.323679   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
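
For reference, the DNS, Localhost, and HairPin checks in this group all probe the netcat deployment created by the NetCatPod step. Stripped of the test harness, they reduce to the three kubectl invocations below (commands taken from the logged runs; the context name is a placeholder):

    # DNS: cluster DNS must resolve the kubernetes.default service
    kubectl --context <profile> exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: a pod must reach its own localhost:8080
    kubectl --context <profile> exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: a pod must reach itself back through the netcat service name
    kubectl --context <profile> exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"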

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-d8jgl" [876b8084-de22-4c87-b790-4f4b4a3d6f8e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.010060896s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-819544 -n newest-cni-819544
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-819544 -n newest-cni-819544: exit status 7 (94.471941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-819544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.28s)
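
The check above can be reproduced by hand: with the profile stopped, minikube status prints Stopped for the host and exits with status 7, which the test treats as acceptable, and addon configuration can still be applied to the stopped profile. A minimal sketch using the same commands as the log (minikube standing in for the test binary, profile name as a placeholder):

    minikube status --format={{.Host}} -p <profile>    # prints "Stopped", exit status 7 (may be ok)
    minikube addons enable dashboard -p <profile> --images=MetricsScraper=registry.k8s.io/echoserver:1.4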

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-018985 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.69s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-819544 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
I1212 01:25:34.668357   93600 config.go:182] Loaded profile config "kindnet-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-819544 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (37.425859685s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-819544 -n newest-cni-819544
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.69s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-018985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v9kjs" [43e9a704-fe20-4ac5-af8e-1d016387d099] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v9kjs" [43e9a704-fe20-4ac5-af8e-1d016387d099] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005008403s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (104.71s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m44.711921933s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.71s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (113.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m53.786090313s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (113.79s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-018985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (106.47s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m46.474100324s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (106.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-819544 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-819544 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-819544 -n newest-cni-819544
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-819544 -n newest-cni-819544: exit status 2 (249.824217ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-819544 -n newest-cni-819544
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-819544 -n newest-cni-819544: exit status 2 (238.977982ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-819544 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-819544 -n newest-cni-819544
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-819544 -n newest-cni-819544
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.33s)
E1212 01:28:17.657373   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
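
The Pause check cycles the profile through pause and unpause and inspects component state in between; while paused, minikube status intentionally returns exit status 2, which the test tolerates. A hand-run equivalent using the same commands as the log (minikube standing in for the test binary, profile name as a placeholder):

    minikube pause -p <profile> --alsologtostderr -v=1
    minikube status --format={{.APIServer}} -p <profile>   # "Paused", exit status 2 (may be ok)
    minikube status --format={{.Kubelet}} -p <profile>     # "Stopped", exit status 2 (may be ok)
    minikube unpause -p <profile> --alsologtostderr -v=1
    minikube status --format={{.APIServer}} -p <profile>   # exits cleanly again once unpaused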

                                                
                                    
TestNetworkPlugins/group/flannel/Start (140.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1212 01:26:32.074938   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:32.081431   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:32.092849   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:32.114267   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:32.155709   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:32.237206   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:32.398837   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:32.720644   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:33.362911   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:34.644880   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:37.008835   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/no-preload-242725/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:37.206376   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:42.328091   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:52.570399   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:55.717530   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:55.723989   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:55.735433   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:55.757669   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:55.799124   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:55.880644   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:56.042366   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:56.364334   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:57.006314   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:26:58.288116   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:27:00.850205   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:27:05.971872   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:27:13.052073   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:27:16.213382   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m20.846658017s)
--- PASS: TestNetworkPlugins/group/flannel/Start (140.85s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4x97v" [a747dcaa-6b94-4a08-9c69-0aabfb97a393] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005855285s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-018985 "pgrep -a kubelet"
I1212 01:27:27.933394   93600 config.go:182] Loaded profile config "calico-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.5s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-018985 replace --force -f testdata/netcat-deployment.yaml
I1212 01:27:28.424155   93600 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z7hfv" [165df40e-e01f-43d1-a3c8-15d00e4d4c32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:27:29.700815   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-z7hfv" [165df40e-e01f-43d1-a3c8-15d00e4d4c32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005133113s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.50s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-018985 "pgrep -a kubelet"
I1212 01:27:36.164608   93600 config.go:182] Loaded profile config "custom-flannel-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-018985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6rggb" [f322a720-6c75-4524-b77e-cb4a05c75bb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:27:36.695676   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-6rggb" [f322a720-6c75-4524-b77e-cb4a05c75bb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005545613s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-018985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-018985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1212 01:27:46.617714   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/addons-021354/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-018985 "pgrep -a kubelet"
I1212 01:27:50.138621   93600 config.go:182] Loaded profile config "enable-default-cni-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-018985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d7f5r" [78b14bbe-bd3a-4235-99fe-2167a57411b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1212 01:27:54.013673   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/default-k8s-diff-port-076578/client.crt: no such file or directory" logger="UnhandledError"
E1212 01:27:55.698838   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/functional-075541/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-d7f5r" [78b14bbe-bd3a-4235-99fe-2167a57411b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 14.005639939s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (14.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (91.13s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-018985 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m31.130361384s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.13s)
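
Taken together, the Start steps in this section bring up one cluster per CNI selection; the logged invocations differ only in the CNI-related flag, roughly as follows (profile names and the shared memory/driver/runtime flags elided):

    minikube start ...                                     # auto: driver default CNI
    minikube start --cni=kindnet ...                       # kindnet
    minikube start --cni=calico ...                        # calico
    minikube start --cni=flannel ...                       # flannel
    minikube start --cni=bridge ...                        # bridge
    minikube start --cni=testdata/kube-flannel.yaml ...    # custom-flannel: manifest from disk
    minikube start --enable-default-cni=true ...           # enable-default-cni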

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-018985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rgn6q" [c3c3ace3-c673-4cf8-b8ea-90db47e7f126] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004587206s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-018985 "pgrep -a kubelet"
I1212 01:28:43.037836   93600 config.go:182] Loaded profile config "flannel-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-018985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dvjcs" [c835b6ae-701a-4d53-9856-5af700d9cef1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dvjcs" [c835b6ae-701a-4d53-9856-5af700d9cef1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.007267619s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-018985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-018985 "pgrep -a kubelet"
I1212 01:29:31.244303   93600 config.go:182] Loaded profile config "bridge-018985": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-018985 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r65hv" [6adb06d0-f05c-4845-bf4c-625ea81e920a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r65hv" [6adb06d0-f05c-4845-bf4c-625ea81e920a] Running
E1212 01:29:39.579510   93600 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/old-k8s-version-738445/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004251549s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-018985 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-018985 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (39/320)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.2/cached-images 0
15 TestDownloadOnly/v1.31.2/binaries 0
16 TestDownloadOnly/v1.31.2/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestGvisorAddon 0
178 TestImageBuild 0
205 TestKicCustomNetwork 0
206 TestKicExistingNetwork 0
207 TestKicCustomSubnet 0
208 TestKicStaticIP 0
240 TestChangeNoneUser 0
243 TestScheduledStopWindows 0
245 TestSkaffold 0
247 TestInsufficientStorage 0
251 TestMissingContainerUpgrade 0
260 TestStartStop/group/disable-driver-mounts 0.16
274 TestNetworkPlugins/group/kubenet 2.85
282 TestNetworkPlugins/group/cilium 3.75
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-021354 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-535684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-535684
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (2.85s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-018985 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-018985" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Dec 2024 00:49:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.205:8443
  name: pause-409734
contexts:
- context:
    cluster: pause-409734
    extensions:
    - extension:
        last-update: Thu, 12 Dec 2024 00:49:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-409734
  name: pause-409734
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-409734
  user:
    client-certificate: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/pause-409734/client.crt
    client-key: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/pause-409734/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-018985

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-018985"

                                                
                                                
----------------------- debugLogs end: kubenet-018985 [took: 2.707298363s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-018985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-018985
--- SKIP: TestNetworkPlugins/group/kubenet (2.85s)

TestNetworkPlugins/group/cilium (3.75s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-018985 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-018985" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20083-86355/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 12 Dec 2024 00:49:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.205:8443
  name: pause-409734
contexts:
- context:
    cluster: pause-409734
    extensions:
    - extension:
        last-update: Thu, 12 Dec 2024 00:49:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-409734
  name: pause-409734
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-409734
  user:
    client-certificate: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/pause-409734/client.crt
    client-key: /home/jenkins/minikube-integration/20083-86355/.minikube/profiles/pause-409734/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-018985

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-018985" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-018985"

                                                
                                                
----------------------- debugLogs end: cilium-018985 [took: 3.585419073s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-018985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-018985
--- SKIP: TestNetworkPlugins/group/cilium (3.75s)